00:00:00.001 Started by upstream project "autotest-per-patch" build number 132828 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.070 The recommended git tool is: git 00:00:00.070 using credential 00000000-0000-0000-0000-000000000002 00:00:00.072 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.149 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.223 Using shallow fetch with depth 1 00:00:00.223 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.223 > git --version # timeout=10 00:00:00.293 > git --version # 'git version 2.39.2' 00:00:00.293 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.345 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.345 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.782 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.794 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.807 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.807 > git config core.sparsecheckout # timeout=10 00:00:04.819 > git read-tree -mu HEAD # timeout=10 00:00:04.835 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.852 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.852 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.958 [Pipeline] Start of Pipeline 00:00:04.972 [Pipeline] library 00:00:04.974 Loading library shm_lib@master 00:00:04.974 Library shm_lib@master is cached. Copying from home. 00:00:04.991 [Pipeline] node 00:00:05.000 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.001 [Pipeline] { 00:00:05.011 [Pipeline] catchError 00:00:05.013 [Pipeline] { 00:00:05.022 [Pipeline] wrap 00:00:05.028 [Pipeline] { 00:00:05.033 [Pipeline] stage 00:00:05.034 [Pipeline] { (Prologue) 00:00:05.044 [Pipeline] echo 00:00:05.045 Node: VM-host-SM9 00:00:05.049 [Pipeline] cleanWs 00:00:05.056 [WS-CLEANUP] Deleting project workspace... 00:00:05.056 [WS-CLEANUP] Deferred wipeout is used... 
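Note: the checkout above pins the jbp helper repo to a single commit by shallow-fetching one branch tip and checking out FETCH_HEAD. A minimal sketch of the equivalent manual steps, assuming anonymous access (the job itself goes through an HTTP proxy and GIT_ASKPASS credentials, omitted here):

    git init jbp && cd jbp
    # fetch only the tip of master (history depth 1), as the job does
    git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # db4637e8b949f278f369ec13f70585206ccd9507 was FETCH_HEAD in this run
    git checkout -f FETCH_HEAD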
00:00:05.062 [WS-CLEANUP] done 00:00:05.304 [Pipeline] setCustomBuildProperty 00:00:05.392 [Pipeline] httpRequest 00:00:06.262 [Pipeline] echo 00:00:06.263 Sorcerer 10.211.164.112 is alive 00:00:06.274 [Pipeline] retry 00:00:06.276 [Pipeline] { 00:00:06.291 [Pipeline] httpRequest 00:00:06.295 HttpMethod: GET 00:00:06.296 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.296 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.302 Response Code: HTTP/1.1 200 OK 00:00:06.303 Success: Status code 200 is in the accepted range: 200,404 00:00:06.304 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:27.628 [Pipeline] } 00:00:27.645 [Pipeline] // retry 00:00:27.653 [Pipeline] sh 00:00:27.933 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:27.948 [Pipeline] httpRequest 00:00:28.410 [Pipeline] echo 00:00:28.412 Sorcerer 10.211.164.112 is alive 00:00:28.423 [Pipeline] retry 00:00:28.425 [Pipeline] { 00:00:28.439 [Pipeline] httpRequest 00:00:28.444 HttpMethod: GET 00:00:28.445 URL: http://10.211.164.112/packages/spdk_92d1e663afe5048334744edf8d98e5b9a54a794a.tar.gz 00:00:28.445 Sending request to url: http://10.211.164.112/packages/spdk_92d1e663afe5048334744edf8d98e5b9a54a794a.tar.gz 00:00:28.452 Response Code: HTTP/1.1 200 OK 00:00:28.453 Success: Status code 200 is in the accepted range: 200,404 00:00:28.454 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_92d1e663afe5048334744edf8d98e5b9a54a794a.tar.gz 00:02:18.366 [Pipeline] } 00:02:18.384 [Pipeline] // retry 00:02:18.391 [Pipeline] sh 00:02:18.671 + tar --no-same-owner -xf spdk_92d1e663afe5048334744edf8d98e5b9a54a794a.tar.gz 00:02:21.971 [Pipeline] sh 00:02:22.257 + git -C spdk log --oneline -n5 00:02:22.257 92d1e663a bdev/nvme: Fix depopulating a namespace twice 00:02:22.257 52a413487 bdev: do not retry nomem I/Os during aborting them 00:02:22.257 d13942918 bdev: simplify bdev_reset_freeze_channel 00:02:22.257 0edc184ec accel/mlx5: Support mkey registration 00:02:22.257 06358c250 bdev/nvme: use poll_group's fd_group to register interrupts 00:02:22.275 [Pipeline] writeFile 00:02:22.290 [Pipeline] sh 00:02:22.571 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:22.583 [Pipeline] sh 00:02:22.869 + cat autorun-spdk.conf 00:02:22.869 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.869 SPDK_TEST_NVME=1 00:02:22.869 SPDK_TEST_FTL=1 00:02:22.869 SPDK_TEST_ISAL=1 00:02:22.869 SPDK_RUN_ASAN=1 00:02:22.869 SPDK_RUN_UBSAN=1 00:02:22.869 SPDK_TEST_XNVME=1 00:02:22.869 SPDK_TEST_NVME_FDP=1 00:02:22.869 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.876 RUN_NIGHTLY=0 00:02:22.878 [Pipeline] } 00:02:22.891 [Pipeline] // stage 00:02:22.906 [Pipeline] stage 00:02:22.908 [Pipeline] { (Run VM) 00:02:22.920 [Pipeline] sh 00:02:23.198 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:23.198 + echo 'Start stage prepare_nvme.sh' 00:02:23.198 Start stage prepare_nvme.sh 00:02:23.198 + [[ -n 1 ]] 00:02:23.198 + disk_prefix=ex1 00:02:23.198 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:23.198 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:23.198 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:23.198 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:23.198 ++ SPDK_TEST_NVME=1 00:02:23.198 ++ SPDK_TEST_FTL=1 00:02:23.198 ++ SPDK_TEST_ISAL=1 00:02:23.198 ++ SPDK_RUN_ASAN=1 
00:02:23.198 ++ SPDK_RUN_UBSAN=1 00:02:23.198 ++ SPDK_TEST_XNVME=1 00:02:23.198 ++ SPDK_TEST_NVME_FDP=1 00:02:23.198 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:23.198 ++ RUN_NIGHTLY=0 00:02:23.198 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:23.198 + nvme_files=() 00:02:23.198 + declare -A nvme_files 00:02:23.198 + backend_dir=/var/lib/libvirt/images/backends 00:02:23.198 + nvme_files['nvme.img']=5G 00:02:23.198 + nvme_files['nvme-cmb.img']=5G 00:02:23.198 + nvme_files['nvme-multi0.img']=4G 00:02:23.198 + nvme_files['nvme-multi1.img']=4G 00:02:23.198 + nvme_files['nvme-multi2.img']=4G 00:02:23.198 + nvme_files['nvme-openstack.img']=8G 00:02:23.198 + nvme_files['nvme-zns.img']=5G 00:02:23.198 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:23.198 + (( SPDK_TEST_FTL == 1 )) 00:02:23.198 + nvme_files["nvme-ftl.img"]=6G 00:02:23.198 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:23.198 + nvme_files["nvme-fdp.img"]=1G 00:02:23.198 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:23.198 + for nvme in "${!nvme_files[@]}" 00:02:23.198 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:02:23.198 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:23.198 + for nvme in "${!nvme_files[@]}" 00:02:23.198 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:02:23.198 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:23.198 + for nvme in "${!nvme_files[@]}" 00:02:23.198 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:02:23.198 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:23.198 + for nvme in "${!nvme_files[@]}" 00:02:23.198 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:02:23.198 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:23.198 + for nvme in "${!nvme_files[@]}" 00:02:23.198 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:02:23.456 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:23.456 + for nvme in "${!nvme_files[@]}" 00:02:23.456 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:02:23.456 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:23.456 + for nvme in "${!nvme_files[@]}" 00:02:23.456 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:02:23.456 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:23.456 + for nvme in "${!nvme_files[@]}" 00:02:23.456 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:02:23.456 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:23.456 + for nvme in "${!nvme_files[@]}" 00:02:23.456 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:02:23.456 Formatting 
'/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:23.456 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:02:23.456 + echo 'End stage prepare_nvme.sh' 00:02:23.456 End stage prepare_nvme.sh 00:02:23.468 [Pipeline] sh 00:02:23.815 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:23.815 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:23.815 00:02:23.815 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:23.815 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:23.815 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:23.815 HELP=0 00:02:23.815 DRY_RUN=0 00:02:23.815 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:02:23.815 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:23.815 NVME_AUTO_CREATE=0 00:02:23.815 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:02:23.815 NVME_CMB=,,,, 00:02:23.815 NVME_PMR=,,,, 00:02:23.815 NVME_ZNS=,,,, 00:02:23.815 NVME_MS=true,,,, 00:02:23.815 NVME_FDP=,,,on, 00:02:23.815 SPDK_VAGRANT_DISTRO=fedora39 00:02:23.815 SPDK_VAGRANT_VMCPU=10 00:02:23.815 SPDK_VAGRANT_VMRAM=12288 00:02:23.815 SPDK_VAGRANT_PROVIDER=libvirt 00:02:23.815 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:23.815 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:23.815 SPDK_OPENSTACK_NETWORK=0 00:02:23.815 VAGRANT_PACKAGE_BOX=0 00:02:23.815 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:23.815 FORCE_DISTRO=true 00:02:23.815 VAGRANT_BOX_VERSION= 00:02:23.815 EXTRA_VAGRANTFILES= 00:02:23.815 NIC_MODEL=e1000 00:02:23.815 00:02:23.815 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:23.815 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:28.000 Bringing machine 'default' up with 'libvirt' provider... 00:02:28.000 ==> default: Creating image (snapshot of base box volume). 00:02:28.000 ==> default: Creating domain with the following settings... 
00:02:28.000 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733829049_090683a08b61f6c26301 00:02:28.000 ==> default: -- Domain type: kvm 00:02:28.000 ==> default: -- Cpus: 10 00:02:28.000 ==> default: -- Feature: acpi 00:02:28.000 ==> default: -- Feature: apic 00:02:28.000 ==> default: -- Feature: pae 00:02:28.000 ==> default: -- Memory: 12288M 00:02:28.000 ==> default: -- Memory Backing: hugepages: 00:02:28.000 ==> default: -- Management MAC: 00:02:28.000 ==> default: -- Loader: 00:02:28.000 ==> default: -- Nvram: 00:02:28.000 ==> default: -- Base box: spdk/fedora39 00:02:28.000 ==> default: -- Storage pool: default 00:02:28.000 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733829049_090683a08b61f6c26301.img (20G) 00:02:28.000 ==> default: -- Volume Cache: default 00:02:28.000 ==> default: -- Kernel: 00:02:28.000 ==> default: -- Initrd: 00:02:28.000 ==> default: -- Graphics Type: vnc 00:02:28.000 ==> default: -- Graphics Port: -1 00:02:28.000 ==> default: -- Graphics IP: 127.0.0.1 00:02:28.000 ==> default: -- Graphics Password: Not defined 00:02:28.000 ==> default: -- Video Type: cirrus 00:02:28.000 ==> default: -- Video VRAM: 9216 00:02:28.000 ==> default: -- Sound Type: 00:02:28.000 ==> default: -- Keymap: en-us 00:02:28.000 ==> default: -- TPM Path: 00:02:28.000 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:28.000 ==> default: -- Command line args: 00:02:28.000 ==> default: -> value=-device, 00:02:28.000 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:28.000 ==> default: -> value=-drive, 00:02:28.000 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:28.000 ==> default: -> value=-device, 00:02:28.000 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:28.000 ==> default: -> value=-device, 00:02:28.000 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:28.000 ==> default: -> value=-drive, 00:02:28.000 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:02:28.000 ==> default: -> value=-device, 00:02:28.000 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:28.000 ==> default: -> value=-device, 00:02:28.000 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:28.000 ==> default: -> value=-drive, 00:02:28.000 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:28.000 ==> default: -> value=-device, 00:02:28.000 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:28.000 ==> default: -> value=-drive, 00:02:28.000 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:28.000 ==> default: -> value=-device, 00:02:28.000 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:28.000 ==> default: -> value=-drive, 00:02:28.000 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:28.000 ==> default: -> value=-device, 00:02:28.000 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:28.000 ==> default: -> value=-device, 00:02:28.000 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:28.000 ==> default: -> value=-device, 00:02:28.000 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:28.000 ==> default: -> value=-drive, 00:02:28.000 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:28.000 ==> default: -> value=-device, 00:02:28.000 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:28.260 ==> default: Creating shared folders metadata... 00:02:28.260 ==> default: Starting domain. 00:02:29.638 ==> default: Waiting for domain to get an IP address... 00:02:47.721 ==> default: Waiting for SSH to become available... 00:02:48.656 ==> default: Configuring and enabling network interfaces... 00:02:52.839 default: SSH address: 192.168.121.16:22 00:02:52.839 default: SSH username: vagrant 00:02:52.839 default: SSH auth method: private key 00:02:55.371 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:03.487 ==> default: Mounting SSHFS shared folder... 00:03:04.471 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:04.471 ==> default: Checking Mount.. 00:03:05.406 ==> default: Folder Successfully Mounted! 00:03:05.406 ==> default: Running provisioner: file... 00:03:06.341 default: ~/.gitconfig => .gitconfig 00:03:06.598 00:03:06.598 SUCCESS! 00:03:06.598 00:03:06.598 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:06.598 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:06.598 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:06.598 00:03:06.606 [Pipeline] } 00:03:06.621 [Pipeline] // stage 00:03:06.630 [Pipeline] dir 00:03:06.630 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:03:06.632 [Pipeline] { 00:03:06.645 [Pipeline] catchError 00:03:06.648 [Pipeline] { 00:03:06.661 [Pipeline] sh 00:03:06.941 + vagrant ssh-config --host vagrant 00:03:06.941 + tee ssh_conf 00:03:06.941 + sed -ne /^Host/,$p 00:03:11.170 Host vagrant 00:03:11.170 HostName 192.168.121.16 00:03:11.170 User vagrant 00:03:11.170 Port 22 00:03:11.170 UserKnownHostsFile /dev/null 00:03:11.170 StrictHostKeyChecking no 00:03:11.170 PasswordAuthentication no 00:03:11.170 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:11.170 IdentitiesOnly yes 00:03:11.170 LogLevel FATAL 00:03:11.170 ForwardAgent yes 00:03:11.170 ForwardX11 yes 00:03:11.170 00:03:11.185 [Pipeline] withEnv 00:03:11.187 [Pipeline] { 00:03:11.201 [Pipeline] sh 00:03:11.482 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:11.483 source /etc/os-release 00:03:11.483 [[ -e /image.version ]] && img=$(< /image.version) 00:03:11.483 # Minimal, systemd-like check. 
00:03:11.483 if [[ -e /.dockerenv ]]; then 00:03:11.483 # Clear garbage from the node's name: 00:03:11.483 # agt-er_autotest_547-896 -> autotest_547-896 00:03:11.483 # $HOSTNAME is the actual container id 00:03:11.483 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:11.483 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:11.483 # We can assume this is a mount from a host where container is running, 00:03:11.483 # so fetch its hostname to easily identify the target swarm worker. 00:03:11.483 container="$(< /etc/hostname) ($agent)" 00:03:11.483 else 00:03:11.483 # Fallback 00:03:11.483 container=$agent 00:03:11.483 fi 00:03:11.483 fi 00:03:11.483 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:11.483 00:03:11.493 [Pipeline] } 00:03:11.509 [Pipeline] // withEnv 00:03:11.517 [Pipeline] setCustomBuildProperty 00:03:11.533 [Pipeline] stage 00:03:11.535 [Pipeline] { (Tests) 00:03:11.551 [Pipeline] sh 00:03:11.830 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:12.102 [Pipeline] sh 00:03:12.394 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:12.408 [Pipeline] timeout 00:03:12.409 Timeout set to expire in 50 min 00:03:12.411 [Pipeline] { 00:03:12.426 [Pipeline] sh 00:03:12.705 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:13.270 HEAD is now at 92d1e663a bdev/nvme: Fix depopulating a namespace twice 00:03:13.281 [Pipeline] sh 00:03:13.558 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:13.829 [Pipeline] sh 00:03:14.107 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:14.122 [Pipeline] sh 00:03:14.402 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:03:14.661 ++ readlink -f spdk_repo 00:03:14.661 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:14.661 + [[ -n /home/vagrant/spdk_repo ]] 00:03:14.661 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:14.661 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:14.661 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:14.661 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:14.661 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:14.661 + [[ nvme-vg-autotest == pkgdep-* ]] 00:03:14.661 + cd /home/vagrant/spdk_repo 00:03:14.661 + source /etc/os-release 00:03:14.661 ++ NAME='Fedora Linux' 00:03:14.661 ++ VERSION='39 (Cloud Edition)' 00:03:14.661 ++ ID=fedora 00:03:14.661 ++ VERSION_ID=39 00:03:14.661 ++ VERSION_CODENAME= 00:03:14.661 ++ PLATFORM_ID=platform:f39 00:03:14.661 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:14.661 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:14.661 ++ LOGO=fedora-logo-icon 00:03:14.661 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:14.661 ++ HOME_URL=https://fedoraproject.org/ 00:03:14.661 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:14.661 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:14.661 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:14.661 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:14.661 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:14.661 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:14.661 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:14.661 ++ SUPPORT_END=2024-11-12 00:03:14.661 ++ VARIANT='Cloud Edition' 00:03:14.661 ++ VARIANT_ID=cloud 00:03:14.661 + uname -a 00:03:14.661 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:14.661 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:14.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:15.177 Hugepages 00:03:15.177 node hugesize free / total 00:03:15.177 node0 1048576kB 0 / 0 00:03:15.177 node0 2048kB 0 / 0 00:03:15.177 00:03:15.178 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:15.178 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:15.178 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:15.178 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:15.436 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:15.436 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:15.436 + rm -f /tmp/spdk-ld-path 00:03:15.436 + source autorun-spdk.conf 00:03:15.436 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:15.436 ++ SPDK_TEST_NVME=1 00:03:15.436 ++ SPDK_TEST_FTL=1 00:03:15.436 ++ SPDK_TEST_ISAL=1 00:03:15.436 ++ SPDK_RUN_ASAN=1 00:03:15.436 ++ SPDK_RUN_UBSAN=1 00:03:15.436 ++ SPDK_TEST_XNVME=1 00:03:15.436 ++ SPDK_TEST_NVME_FDP=1 00:03:15.436 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:15.436 ++ RUN_NIGHTLY=0 00:03:15.436 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:15.436 + [[ -n '' ]] 00:03:15.436 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:15.436 + for M in /var/spdk/build-*-manifest.txt 00:03:15.436 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:15.436 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:15.436 + for M in /var/spdk/build-*-manifest.txt 00:03:15.436 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:15.436 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:15.436 + for M in /var/spdk/build-*-manifest.txt 00:03:15.436 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:15.436 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:15.436 ++ uname 00:03:15.436 + [[ Linux == \L\i\n\u\x ]] 00:03:15.436 + sudo dmesg -T 00:03:15.436 + sudo dmesg --clear 00:03:15.436 + dmesg_pid=5308 00:03:15.436 
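Note: autorun-spdk.conf, generated earlier in the job and sourced in the ++ trace above, is a plain key=value shell fragment; each SPDK_*/RUN_* flag simply becomes an environment variable that gates which build options and test suites run. A minimal sketch of that pattern with a hypothetical consumer (check_conf.sh is illustrative, not part of SPDK):

    #!/bin/bash
    # check_conf.sh -- illustrate how a sourced conf toggles test stages
    conf=${1:-autorun-spdk.conf}
    [[ -e $conf ]] || { echo "missing $conf" >&2; exit 1; }
    source "$conf"    # defines SPDK_TEST_NVME=1, SPDK_RUN_ASAN=1, ...
    (( SPDK_TEST_NVME )) && echo 'would run NVMe functional tests'
    (( SPDK_RUN_ASAN ))  && echo 'would build with --enable-asan'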
+ [[ Fedora Linux == FreeBSD ]] 00:03:15.436 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:15.436 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:15.436 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:15.436 + [[ -x /usr/src/fio-static/fio ]] 00:03:15.436 + sudo dmesg -Tw 00:03:15.436 + export FIO_BIN=/usr/src/fio-static/fio 00:03:15.436 + FIO_BIN=/usr/src/fio-static/fio 00:03:15.436 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:15.436 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:15.436 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:15.436 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:15.436 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:15.436 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:15.436 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:15.436 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:15.436 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:15.436 11:11:37 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:15.436 11:11:37 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:15.436 11:11:37 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:15.436 11:11:37 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:03:15.436 11:11:37 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:03:15.436 11:11:37 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:03:15.436 11:11:37 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:03:15.436 11:11:37 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:15.436 11:11:37 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:03:15.436 11:11:37 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:03:15.436 11:11:37 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:15.436 11:11:37 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:03:15.436 11:11:37 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:15.436 11:11:37 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:15.694 11:11:37 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:15.694 11:11:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:15.694 11:11:37 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:15.694 11:11:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:15.694 11:11:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:15.694 11:11:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:15.694 11:11:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.694 11:11:37 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.695 11:11:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.695 11:11:37 -- paths/export.sh@5 -- $ export PATH 00:03:15.695 11:11:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.695 11:11:37 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:15.695 11:11:37 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:15.695 11:11:37 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733829097.XXXXXX 00:03:15.695 11:11:37 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733829097.94uPrF 00:03:15.695 11:11:37 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:15.695 11:11:37 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:15.695 11:11:37 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:15.695 11:11:37 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:15.695 11:11:37 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:15.695 11:11:37 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:15.695 11:11:37 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:15.695 11:11:37 -- common/autotest_common.sh@10 -- $ set +x 00:03:15.695 11:11:37 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:03:15.695 11:11:37 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:15.695 11:11:37 -- pm/common@17 -- $ local monitor 00:03:15.695 11:11:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.695 11:11:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.695 11:11:37 -- pm/common@25 -- $ sleep 1 00:03:15.695 11:11:37 -- pm/common@21 -- $ date +%s 00:03:15.695 11:11:37 -- pm/common@21 -- $ date +%s 00:03:15.695 11:11:37 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733829097 00:03:15.695 11:11:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733829097 00:03:15.695 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733829097_collect-cpu-load.pm.log 00:03:15.695 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733829097_collect-vmstat.pm.log 00:03:16.629 11:11:38 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:16.629 11:11:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:16.629 11:11:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:16.629 11:11:38 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:16.629 11:11:38 -- spdk/autobuild.sh@16 -- $ date -u 00:03:16.629 Tue Dec 10 11:11:38 AM UTC 2024 00:03:16.629 11:11:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:16.629 v25.01-pre-325-g92d1e663a 00:03:16.629 11:11:38 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:16.629 11:11:38 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:16.629 11:11:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:16.629 11:11:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:16.629 11:11:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.629 ************************************ 00:03:16.629 START TEST asan 00:03:16.629 ************************************ 00:03:16.629 using asan 00:03:16.629 11:11:38 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:03:16.629 00:03:16.629 real 0m0.000s 00:03:16.629 user 0m0.000s 00:03:16.629 sys 0m0.000s 00:03:16.629 11:11:38 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:16.629 ************************************ 00:03:16.629 END TEST asan 00:03:16.630 11:11:38 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:16.630 ************************************ 00:03:16.630 11:11:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:16.630 11:11:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:16.630 11:11:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:16.630 11:11:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:16.630 11:11:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.630 ************************************ 00:03:16.630 START TEST ubsan 00:03:16.630 ************************************ 00:03:16.630 using ubsan 00:03:16.630 11:11:38 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:16.630 00:03:16.630 real 0m0.000s 00:03:16.630 user 0m0.000s 00:03:16.630 sys 0m0.000s 00:03:16.630 11:11:38 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:16.630 11:11:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:16.630 ************************************ 00:03:16.630 END TEST ubsan 00:03:16.630 ************************************ 00:03:16.630 11:11:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:16.630 11:11:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:16.630 11:11:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:16.630 11:11:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:16.630 11:11:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:16.630 11:11:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:16.630 11:11:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
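Note: the START TEST / END TEST banners above come from SPDK's run_test helper, which brackets a command with banners and per-test timing (the real/user/sys blocks). An illustrative analogue of that wrapper, assuming nothing about the actual implementation in autotest_common.sh:

    # hypothetical stand-in for SPDK's run_test banner/timing wrapper
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"; local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test_sketch asan echo 'using asan'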
00:03:16.630 11:11:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:16.630 11:11:38 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:03:16.888 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:16.888 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:17.147 Using 'verbs' RDMA provider
00:03:30.297 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:42.499 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:42.758 Creating mk/config.mk...done.
00:03:42.758 Creating mk/cc.flags.mk...done.
00:03:42.758 Type 'make' to build.
00:03:42.758 11:12:04 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:42.758 11:12:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:42.758 11:12:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:42.758 11:12:04 -- common/autotest_common.sh@10 -- $ set +x
00:03:42.758 ************************************
00:03:42.758 START TEST make
00:03:42.758 ************************************
00:03:42.758 11:12:04 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:43.017 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:43.017 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:43.017 meson setup builddir \
00:03:43.017 -Dwith-libaio=enabled \
00:03:43.017 -Dwith-liburing=enabled \
00:03:43.017 -Dwith-libvfn=disabled \
00:03:43.017 -Dwith-spdk=disabled \
00:03:43.017 -Dexamples=false \
00:03:43.017 -Dtests=false \
00:03:43.017 -Dtools=false && \
00:03:43.017 meson compile -C builddir && \
00:03:43.017 cd -)
00:03:43.017 make[1]: Nothing to be done for 'all'.
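Note: the xnvme subproject above is configured and built with Meson before the main SPDK make. Reproducing that step standalone is just the command the log records, assuming the same spdk_repo checkout:

    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir

The -Dwith-* feature options account for the "Dependency libvfn skipped" and "Subproject spdk : skipped: feature with-spdk disabled" lines in the Meson output that follows.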
00:03:47.200 The Meson build system 00:03:47.200 Version: 1.5.0 00:03:47.200 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:47.200 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:47.200 Build type: native build 00:03:47.200 Project name: xnvme 00:03:47.200 Project version: 0.7.5 00:03:47.200 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:47.200 C linker for the host machine: cc ld.bfd 2.40-14 00:03:47.200 Host machine cpu family: x86_64 00:03:47.200 Host machine cpu: x86_64 00:03:47.200 Message: host_machine.system: linux 00:03:47.200 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:47.200 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:47.200 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:47.200 Run-time dependency threads found: YES 00:03:47.200 Has header "setupapi.h" : NO 00:03:47.200 Has header "linux/blkzoned.h" : YES 00:03:47.200 Has header "linux/blkzoned.h" : YES (cached) 00:03:47.200 Has header "libaio.h" : YES 00:03:47.200 Library aio found: YES 00:03:47.200 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:47.200 Run-time dependency liburing found: YES 2.2 00:03:47.200 Dependency libvfn skipped: feature with-libvfn disabled 00:03:47.200 Found CMake: /usr/bin/cmake (3.27.7) 00:03:47.200 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:47.200 Subproject spdk : skipped: feature with-spdk disabled 00:03:47.200 Run-time dependency appleframeworks found: NO (tried framework) 00:03:47.200 Run-time dependency appleframeworks found: NO (tried framework) 00:03:47.200 Library rt found: YES 00:03:47.200 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:47.200 Configuring xnvme_config.h using configuration 00:03:47.200 Configuring xnvme.spec using configuration 00:03:47.200 Run-time dependency bash-completion found: YES 2.11 00:03:47.200 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:47.200 Program cp found: YES (/usr/bin/cp) 00:03:47.200 Build targets in project: 3 00:03:47.200 00:03:47.200 xnvme 0.7.5 00:03:47.200 00:03:47.200 Subprojects 00:03:47.200 spdk : NO Feature 'with-spdk' disabled 00:03:47.200 00:03:47.200 User defined options 00:03:47.200 examples : false 00:03:47.200 tests : false 00:03:47.200 tools : false 00:03:47.200 with-libaio : enabled 00:03:47.200 with-liburing: enabled 00:03:47.200 with-libvfn : disabled 00:03:47.200 with-spdk : disabled 00:03:47.200 00:03:47.200 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:47.458 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:47.458 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:47.716 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:47.716 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:47.716 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:47.716 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:47.716 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:47.716 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:47.716 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:47.716 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:47.716 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 
00:03:47.716 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:47.974 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:47.974 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:47.974 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:47.974 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:47.974 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:47.974 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:47.974 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:47.974 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:47.974 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:47.974 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:47.974 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:47.974 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:47.974 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:47.974 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:47.974 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:47.974 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:48.232 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:48.232 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:48.232 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:48.232 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:48.232 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:48.232 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:48.232 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:48.232 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:48.232 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:48.232 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:48.232 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:48.232 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:48.232 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:48.232 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:48.232 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:48.232 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:48.232 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:48.232 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:48.232 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:48.232 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:48.232 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:48.490 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:48.490 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:48.490 [51/76] 
Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:48.490 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:48.490 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:48.490 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:48.490 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:48.490 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:48.490 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:48.490 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:48.490 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:48.490 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:48.490 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:48.748 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:48.748 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:48.748 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:48.748 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:48.748 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:48.748 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:48.748 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:49.006 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:49.006 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:49.006 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:49.006 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:49.264 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:49.829 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:49.829 [75/76] Linking static target lib/libxnvme.a 00:03:49.829 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:49.829 INFO: autodetecting backend as ninja 00:03:49.829 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:49.829 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:04.826 The Meson build system 00:04:04.826 Version: 1.5.0 00:04:04.826 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:04.826 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:04.826 Build type: native build 00:04:04.826 Program cat found: YES (/usr/bin/cat) 00:04:04.826 Project name: DPDK 00:04:04.826 Project version: 24.03.0 00:04:04.826 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:04.826 C linker for the host machine: cc ld.bfd 2.40-14 00:04:04.826 Host machine cpu family: x86_64 00:04:04.826 Host machine cpu: x86_64 00:04:04.826 Message: ## Building in Developer Mode ## 00:04:04.826 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:04.826 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:04.826 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:04.826 Program python3 found: YES (/usr/bin/python3) 00:04:04.826 Program cat found: YES (/usr/bin/cat) 00:04:04.826 Compiler for C supports arguments -march=native: YES 00:04:04.826 Checking for size of "void *" : 8 00:04:04.826 Checking for size of "void *" : 8 (cached) 00:04:04.826 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:04:04.826 Library m found: YES 00:04:04.826 Library numa found: YES 00:04:04.826 Has header "numaif.h" : YES 00:04:04.826 Library fdt found: NO 00:04:04.826 Library execinfo found: NO 00:04:04.826 Has header "execinfo.h" : YES 00:04:04.826 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:04.826 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:04.826 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:04.826 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:04.826 Run-time dependency openssl found: YES 3.1.1 00:04:04.826 Run-time dependency libpcap found: YES 1.10.4 00:04:04.826 Has header "pcap.h" with dependency libpcap: YES 00:04:04.826 Compiler for C supports arguments -Wcast-qual: YES 00:04:04.826 Compiler for C supports arguments -Wdeprecated: YES 00:04:04.826 Compiler for C supports arguments -Wformat: YES 00:04:04.826 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:04.826 Compiler for C supports arguments -Wformat-security: NO 00:04:04.826 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:04.826 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:04.826 Compiler for C supports arguments -Wnested-externs: YES 00:04:04.826 Compiler for C supports arguments -Wold-style-definition: YES 00:04:04.826 Compiler for C supports arguments -Wpointer-arith: YES 00:04:04.826 Compiler for C supports arguments -Wsign-compare: YES 00:04:04.826 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:04.826 Compiler for C supports arguments -Wundef: YES 00:04:04.826 Compiler for C supports arguments -Wwrite-strings: YES 00:04:04.826 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:04.826 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:04.826 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:04.826 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:04.826 Program objdump found: YES (/usr/bin/objdump) 00:04:04.826 Compiler for C supports arguments -mavx512f: YES 00:04:04.826 Checking if "AVX512 checking" compiles: YES 00:04:04.826 Fetching value of define "__SSE4_2__" : 1 00:04:04.826 Fetching value of define "__AES__" : 1 00:04:04.826 Fetching value of define "__AVX__" : 1 00:04:04.826 Fetching value of define "__AVX2__" : 1 00:04:04.826 Fetching value of define "__AVX512BW__" : (undefined) 00:04:04.826 Fetching value of define "__AVX512CD__" : (undefined) 00:04:04.826 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:04.826 Fetching value of define "__AVX512F__" : (undefined) 00:04:04.826 Fetching value of define "__AVX512VL__" : (undefined) 00:04:04.826 Fetching value of define "__PCLMUL__" : 1 00:04:04.826 Fetching value of define "__RDRND__" : 1 00:04:04.826 Fetching value of define "__RDSEED__" : 1 00:04:04.826 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:04.826 Fetching value of define "__znver1__" : (undefined) 00:04:04.826 Fetching value of define "__znver2__" : (undefined) 00:04:04.826 Fetching value of define "__znver3__" : (undefined) 00:04:04.826 Fetching value of define "__znver4__" : (undefined) 00:04:04.826 Library asan found: YES 00:04:04.826 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:04.826 Message: lib/log: Defining dependency "log" 00:04:04.826 Message: lib/kvargs: Defining dependency "kvargs" 00:04:04.826 Message: lib/telemetry: Defining dependency "telemetry" 00:04:04.826 Library rt found: YES 00:04:04.826 
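Note: each "Compiler for C supports arguments -X: YES" line in the DPDK Meson output is the result of compiling a tiny test program with that flag. A rough shell equivalent of the probe (an approximation only; Meson's internal check also treats unknown-flag warnings as errors and caches results):

    # approximate what "Compiler for C supports arguments -mavx512f" checks
    cc_supports() { echo 'int main(void){return 0;}' | cc "$1" -x c - -o /dev/null 2>/dev/null; }
    cc_supports -mavx512f && echo '-mavx512f: YES' || echo '-mavx512f: NO'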
Checking for function "getentropy" : NO
00:04:04.826 Message: lib/eal: Defining dependency "eal"
00:04:04.826 Message: lib/ring: Defining dependency "ring"
00:04:04.826 Message: lib/rcu: Defining dependency "rcu"
00:04:04.826 Message: lib/mempool: Defining dependency "mempool"
00:04:04.826 Message: lib/mbuf: Defining dependency "mbuf"
00:04:04.826 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:04.826 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:04.826 Compiler for C supports arguments -mpclmul: YES
00:04:04.826 Compiler for C supports arguments -maes: YES
00:04:04.826 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:04.826 Compiler for C supports arguments -mavx512bw: YES
00:04:04.826 Compiler for C supports arguments -mavx512dq: YES
00:04:04.826 Compiler for C supports arguments -mavx512vl: YES
00:04:04.826 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:04.826 Compiler for C supports arguments -mavx2: YES
00:04:04.826 Compiler for C supports arguments -mavx: YES
00:04:04.826 Message: lib/net: Defining dependency "net"
00:04:04.826 Message: lib/meter: Defining dependency "meter"
00:04:04.827 Message: lib/ethdev: Defining dependency "ethdev"
00:04:04.827 Message: lib/pci: Defining dependency "pci"
00:04:04.827 Message: lib/cmdline: Defining dependency "cmdline"
00:04:04.827 Message: lib/hash: Defining dependency "hash"
00:04:04.827 Message: lib/timer: Defining dependency "timer"
00:04:04.827 Message: lib/compressdev: Defining dependency "compressdev"
00:04:04.827 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:04.827 Message: lib/dmadev: Defining dependency "dmadev"
00:04:04.827 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:04.827 Message: lib/power: Defining dependency "power"
00:04:04.827 Message: lib/reorder: Defining dependency "reorder"
00:04:04.827 Message: lib/security: Defining dependency "security"
00:04:04.827 Has header "linux/userfaultfd.h" : YES
00:04:04.827 Has header "linux/vduse.h" : YES
00:04:04.827 Message: lib/vhost: Defining dependency "vhost"
00:04:04.827 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:04.827 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:04.827 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:04.827 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:04.827 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:04:04.827 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:04:04.827 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:04:04.827 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:04:04.827 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:04:04.827 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:04:04.827 Program doxygen found: YES (/usr/local/bin/doxygen)
00:04:04.827 Configuring doxy-api-html.conf using configuration
00:04:04.827 Configuring doxy-api-man.conf using configuration
00:04:04.827 Program mandb found: YES (/usr/bin/mandb)
00:04:04.827 Program sphinx-build found: NO
00:04:04.827 Configuring rte_build_config.h using configuration
00:04:04.827 Message:
00:04:04.827 =================
00:04:04.827 Applications Enabled
00:04:04.827 =================
00:04:04.827
00:04:04.827 apps:
00:04:04.827
00:04:04.827
00:04:04.827 Message:
00:04:04.827 =================
00:04:04.827 Libraries Enabled
00:04:04.827 =================
00:04:04.827
00:04:04.827 libs:
00:04:04.827 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:04:04.827 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:04:04.827 cryptodev, dmadev, power, reorder, security, vhost,
00:04:04.827
00:04:04.827 Message:
00:04:04.827 ===============
00:04:04.827 Drivers Enabled
00:04:04.827 ===============
00:04:04.827
00:04:04.827 common:
00:04:04.827
00:04:04.827 bus:
00:04:04.827 pci, vdev,
00:04:04.827 mempool:
00:04:04.827 ring,
00:04:04.827 dma:
00:04:04.827
00:04:04.827 net:
00:04:04.827
00:04:04.827 crypto:
00:04:04.827
00:04:04.827 compress:
00:04:04.827
00:04:04.827 vdpa:
00:04:04.827
00:04:04.827
00:04:04.827 Message:
00:04:04.827 =================
00:04:04.827 Content Skipped
00:04:04.827 =================
00:04:04.827
00:04:04.827 apps:
00:04:04.827 dumpcap: explicitly disabled via build config
00:04:04.827 graph: explicitly disabled via build config
00:04:04.827 pdump: explicitly disabled via build config
00:04:04.827 proc-info: explicitly disabled via build config
00:04:04.827 test-acl: explicitly disabled via build config
00:04:04.827 test-bbdev: explicitly disabled via build config
00:04:04.827 test-cmdline: explicitly disabled via build config
00:04:04.827 test-compress-perf: explicitly disabled via build config
00:04:04.827 test-crypto-perf: explicitly disabled via build config
00:04:04.827 test-dma-perf: explicitly disabled via build config
00:04:04.827 test-eventdev: explicitly disabled via build config
00:04:04.827 test-fib: explicitly disabled via build config
00:04:04.827 test-flow-perf: explicitly disabled via build config
00:04:04.827 test-gpudev: explicitly disabled via build config
00:04:04.827 test-mldev: explicitly disabled via build config
00:04:04.827 test-pipeline: explicitly disabled via build config
00:04:04.827 test-pmd: explicitly disabled via build config
00:04:04.827 test-regex: explicitly disabled via build config
00:04:04.827 test-sad: explicitly disabled via build config
00:04:04.827 test-security-perf: explicitly disabled via build config
00:04:04.827
00:04:04.827 libs:
00:04:04.827 argparse: explicitly disabled via build config
00:04:04.827 metrics: explicitly disabled via build config
00:04:04.827 acl: explicitly disabled via build config
00:04:04.827 bbdev: explicitly disabled via build config
00:04:04.827 bitratestats: explicitly disabled via build config
00:04:04.827 bpf: explicitly disabled via build config
00:04:04.827 cfgfile: explicitly disabled via build config
00:04:04.827 distributor: explicitly disabled via build config
00:04:04.827 efd: explicitly disabled via build config
00:04:04.827 eventdev: explicitly disabled via build config
00:04:04.827 dispatcher: explicitly disabled via build config
00:04:04.827 gpudev: explicitly disabled via build config
00:04:04.827 gro: explicitly disabled via build config
00:04:04.827 gso: explicitly disabled via build config
00:04:04.827 ip_frag: explicitly disabled via build config
00:04:04.827 jobstats: explicitly disabled via build config
00:04:04.827 latencystats: explicitly disabled via build config
00:04:04.827 lpm: explicitly disabled via build config
00:04:04.827 member: explicitly disabled via build config
00:04:04.827 pcapng: explicitly disabled via build config
00:04:04.827 rawdev: explicitly disabled via build config
00:04:04.827 regexdev: explicitly disabled via build config
00:04:04.827 mldev: explicitly disabled via build config
00:04:04.827 rib: explicitly disabled via build config
00:04:04.827 sched: explicitly disabled via build config
00:04:04.827 stack: explicitly disabled via build config
00:04:04.827 ipsec: explicitly disabled via build config
00:04:04.827 pdcp: explicitly disabled via build config
00:04:04.827 fib: explicitly disabled via build config
00:04:04.827 port: explicitly disabled via build config
00:04:04.827 pdump: explicitly disabled via build config
00:04:04.827 table: explicitly disabled via build config
00:04:04.827 pipeline: explicitly disabled via build config
00:04:04.827 graph: explicitly disabled via build config
00:04:04.827 node: explicitly disabled via build config
00:04:04.827
00:04:04.827 drivers:
00:04:04.827 common/cpt: not in enabled drivers build config
00:04:04.827 common/dpaax: not in enabled drivers build config
00:04:04.827 common/iavf: not in enabled drivers build config
00:04:04.827 common/idpf: not in enabled drivers build config
00:04:04.827 common/ionic: not in enabled drivers build config
00:04:04.827 common/mvep: not in enabled drivers build config
00:04:04.827 common/octeontx: not in enabled drivers build config
00:04:04.827 bus/auxiliary: not in enabled drivers build config
00:04:04.827 bus/cdx: not in enabled drivers build config
00:04:04.827 bus/dpaa: not in enabled drivers build config
00:04:04.827 bus/fslmc: not in enabled drivers build config
00:04:04.827 bus/ifpga: not in enabled drivers build config
00:04:04.827 bus/platform: not in enabled drivers build config
00:04:04.827 bus/uacce: not in enabled drivers build config
00:04:04.827 bus/vmbus: not in enabled drivers build config
00:04:04.827 common/cnxk: not in enabled drivers build config
00:04:04.827 common/mlx5: not in enabled drivers build config
00:04:04.827 common/nfp: not in enabled drivers build config
00:04:04.827 common/nitrox: not in enabled drivers build config
00:04:04.827 common/qat: not in enabled drivers build config
00:04:04.827 common/sfc_efx: not in enabled drivers build config
00:04:04.827 mempool/bucket: not in enabled drivers build config
00:04:04.827 mempool/cnxk: not in enabled drivers build config
00:04:04.827 mempool/dpaa: not in enabled drivers build config
00:04:04.827 mempool/dpaa2: not in enabled drivers build config
00:04:04.827 mempool/octeontx: not in enabled drivers build config
00:04:04.827 mempool/stack: not in enabled drivers build config
00:04:04.827 dma/cnxk: not in enabled drivers build config
00:04:04.827 dma/dpaa: not in enabled drivers build config
00:04:04.827 dma/dpaa2: not in enabled drivers build config
00:04:04.827 dma/hisilicon: not in enabled drivers build config
00:04:04.827 dma/idxd: not in enabled drivers build config
00:04:04.827 dma/ioat: not in enabled drivers build config
00:04:04.827 dma/skeleton: not in enabled drivers build config
00:04:04.827 net/af_packet: not in enabled drivers build config
00:04:04.827 net/af_xdp: not in enabled drivers build config
00:04:04.827 net/ark: not in enabled drivers build config
00:04:04.827 net/atlantic: not in enabled drivers build config
00:04:04.827 net/avp: not in enabled drivers build config
00:04:04.827 net/axgbe: not in enabled drivers build config
00:04:04.827 net/bnx2x: not in enabled drivers build config
00:04:04.827 net/bnxt: not in enabled drivers build config
00:04:04.827 net/bonding: not in enabled drivers build config
00:04:04.827 net/cnxk: not in enabled drivers build config
00:04:04.827 net/cpfl: not in enabled drivers build config
00:04:04.827 net/cxgbe: not in enabled drivers build config
00:04:04.827 net/dpaa: not in enabled drivers build config
00:04:04.827 net/dpaa2: not in enabled drivers build config
00:04:04.827 net/e1000: not in enabled drivers build config
00:04:04.827 net/ena: not in enabled drivers build config
00:04:04.827 net/enetc: not in enabled drivers build config
00:04:04.827 net/enetfec: not in enabled drivers build config
00:04:04.827 net/enic: not in enabled drivers build config
00:04:04.827 net/failsafe: not in enabled drivers build config
00:04:04.827 net/fm10k: not in enabled drivers build config
00:04:04.827 net/gve: not in enabled drivers build config
00:04:04.827 net/hinic: not in enabled drivers build config
00:04:04.827 net/hns3: not in enabled drivers build config
00:04:04.827 net/i40e: not in enabled drivers build config
00:04:04.827 net/iavf: not in enabled drivers build config
00:04:04.827 net/ice: not in enabled drivers build config
00:04:04.827 net/idpf: not in enabled drivers build config
00:04:04.828 net/igc: not in enabled drivers build config
00:04:04.828 net/ionic: not in enabled drivers build config
00:04:04.828 net/ipn3ke: not in enabled drivers build config
00:04:04.828 net/ixgbe: not in enabled drivers build config
00:04:04.828 net/mana: not in enabled drivers build config
00:04:04.828 net/memif: not in enabled drivers build config
00:04:04.828 net/mlx4: not in enabled drivers build config
00:04:04.828 net/mlx5: not in enabled drivers build config
00:04:04.828 net/mvneta: not in enabled drivers build config
00:04:04.828 net/mvpp2: not in enabled drivers build config
00:04:04.828 net/netvsc: not in enabled drivers build config
00:04:04.828 net/nfb: not in enabled drivers build config
00:04:04.828 net/nfp: not in enabled drivers build config
00:04:04.828 net/ngbe: not in enabled drivers build config
00:04:04.828 net/null: not in enabled drivers build config
00:04:04.828 net/octeontx: not in enabled drivers build config
00:04:04.828 net/octeon_ep: not in enabled drivers build config
00:04:04.828 net/pcap: not in enabled drivers build config
00:04:04.828 net/pfe: not in enabled drivers build config
00:04:04.828 net/qede: not in enabled drivers build config
00:04:04.828 net/ring: not in enabled drivers build config
00:04:04.828 net/sfc: not in enabled drivers build config
00:04:04.828 net/softnic: not in enabled drivers build config
00:04:04.828 net/tap: not in enabled drivers build config
00:04:04.828 net/thunderx: not in enabled drivers build config
00:04:04.828 net/txgbe: not in enabled drivers build config
00:04:04.828 net/vdev_netvsc: not in enabled drivers build config
00:04:04.828 net/vhost: not in enabled drivers build config
00:04:04.828 net/virtio: not in enabled drivers build config
00:04:04.828 net/vmxnet3: not in enabled drivers build config
00:04:04.828 raw/*: missing internal dependency, "rawdev"
00:04:04.828 crypto/armv8: not in enabled drivers build config
00:04:04.828 crypto/bcmfs: not in enabled drivers build config
00:04:04.828 crypto/caam_jr: not in enabled drivers build config
00:04:04.828 crypto/ccp: not in enabled drivers build config
00:04:04.828 crypto/cnxk: not in enabled drivers build config
00:04:04.828 crypto/dpaa_sec: not in enabled drivers build config
00:04:04.828 crypto/dpaa2_sec: not in enabled drivers build config
00:04:04.828 crypto/ipsec_mb: not in enabled drivers build config
00:04:04.828 crypto/mlx5: not in enabled drivers build config
00:04:04.828 crypto/mvsam: not in enabled drivers build config
00:04:04.828 crypto/nitrox: not in enabled drivers build config
00:04:04.828 crypto/null: not in enabled drivers build config
00:04:04.828 crypto/octeontx: not in enabled drivers build config
00:04:04.828 crypto/openssl: not in enabled drivers build config
00:04:04.828 crypto/scheduler: not in enabled drivers build config
00:04:04.828 crypto/uadk: not in enabled drivers build config
00:04:04.828 crypto/virtio: not in enabled drivers build config
00:04:04.828 compress/isal: not in enabled drivers build config
00:04:04.828 compress/mlx5: not in enabled drivers build config
00:04:04.828 compress/nitrox: not in enabled drivers build config
00:04:04.828 compress/octeontx: not in enabled drivers build config
00:04:04.828 compress/zlib: not in enabled drivers build config
00:04:04.828 regex/*: missing internal dependency, "regexdev"
00:04:04.828 ml/*: missing internal dependency, "mldev"
00:04:04.828 vdpa/ifc: not in enabled drivers build config
00:04:04.828 vdpa/mlx5: not in enabled drivers build config
00:04:04.828 vdpa/nfp: not in enabled drivers build config
00:04:04.828 vdpa/sfc: not in enabled drivers build config
00:04:04.828 event/*: missing internal dependency, "eventdev"
00:04:04.828 baseband/*: missing internal dependency, "bbdev"
00:04:04.828 gpu/*: missing internal dependency, "gpudev"
00:04:04.828
00:04:04.828
00:04:04.828 Build targets in project: 85
00:04:04.828
00:04:04.828 DPDK 24.03.0
00:04:04.828
00:04:04.828 User defined options
00:04:04.828 buildtype : debug
00:04:04.828 default_library : shared
00:04:04.828 libdir : lib
00:04:04.828 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:04.828 b_sanitize : address
00:04:04.828 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:04:04.828 c_link_args :
00:04:04.828 cpu_instruction_set: native
00:04:04.828 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:04:04.828 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:04:04.828 enable_docs : false
00:04:04.828 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:04:04.828 enable_kmods : false
00:04:04.828 max_lcores : 128
00:04:04.828 tests : false
00:04:04.828
00:04:04.828 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:04.828 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:04:04.828 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:04.828 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:04.828 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:04.828 [4/268] Linking static target lib/librte_kvargs.a 00:04:04.828 [5/268] Linking static target lib/librte_log.a 00:04:04.828 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:05.087 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.087 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:05.345 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:05.345 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:05.603 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:04:05.603 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:05.603 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:05.862 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:05.862 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.862 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:05.862 [17/268] Linking static target lib/librte_telemetry.a 00:04:05.862 [18/268] Linking target lib/librte_log.so.24.1 00:04:06.121 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:06.121 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:06.378 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:06.637 [22/268] Linking target lib/librte_kvargs.so.24.1 00:04:06.895 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:06.895 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:06.895 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:07.153 [26/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.153 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:07.153 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:07.153 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:07.153 [30/268] Linking target lib/librte_telemetry.so.24.1 00:04:07.411 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:07.411 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:07.670 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:07.670 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:07.927 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:08.186 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:08.459 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:08.717 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:08.717 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:08.717 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:08.717 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:08.717 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:08.717 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:08.976 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:08.976 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:09.542 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:09.542 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:09.800 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:09.800 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:10.058 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 
00:04:10.316 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:10.316 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:10.576 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:10.576 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:10.576 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:10.834 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:10.834 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:11.400 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:11.401 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:11.659 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:11.659 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:11.659 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:11.917 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:11.917 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:11.917 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:12.174 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:12.433 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:12.999 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:13.262 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:13.262 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:13.262 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:13.262 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:13.520 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:13.520 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:13.520 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:13.520 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:13.778 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:13.778 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:14.036 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:14.294 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:14.553 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:14.553 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:14.811 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:14.811 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:15.069 [85/268] Linking static target lib/librte_eal.a 00:04:15.069 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:15.069 [87/268] Linking static target lib/librte_ring.a 00:04:15.341 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:15.341 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:15.341 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:15.341 [91/268] Linking static target lib/librte_rcu.a 00:04:15.649 [92/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:15.649 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:15.649 [94/268] Linking static target lib/librte_mempool.a 00:04:15.929 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:15.929 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.187 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.187 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:16.446 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:16.446 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:16.704 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:17.270 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:17.270 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:17.529 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:17.529 [105/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.529 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:17.787 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:17.787 [108/268] Linking static target lib/librte_meter.a 00:04:17.787 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:17.787 [110/268] Linking static target lib/librte_net.a 00:04:18.045 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:18.045 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:18.045 [113/268] Linking static target lib/librte_mbuf.a 00:04:18.612 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.612 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.871 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:19.176 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:19.176 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:19.434 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:19.435 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.693 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:19.693 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:20.628 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:20.628 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:20.887 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:20.887 [126/268] Linking static target lib/librte_pci.a 00:04:21.146 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:21.146 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:21.146 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:21.404 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:21.404 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:21.404 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:21.404 [133/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:21.663 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.663 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:21.663 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:21.663 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:21.663 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:21.921 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:21.921 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:21.921 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:21.921 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:21.921 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:21.921 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:22.517 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:22.785 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:22.785 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:22.785 [148/268] Linking static target lib/librte_cmdline.a 00:04:22.785 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:23.044 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:23.610 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:23.610 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:23.869 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:23.869 [154/268] Linking static target lib/librte_ethdev.a 00:04:24.128 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:24.128 [156/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:24.128 [157/268] Linking static target lib/librte_timer.a 00:04:24.128 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:24.387 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:24.387 [160/268] Linking static target lib/librte_compressdev.a 00:04:24.649 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:24.907 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:24.907 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:24.907 [164/268] Linking static target lib/librte_hash.a 00:04:24.907 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.165 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.165 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:25.425 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:25.425 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:25.683 [170/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.683 [171/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:25.942 [172/268] Linking static target lib/librte_dmadev.a 
00:04:26.201 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:26.459 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:26.459 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:26.717 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.717 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:26.975 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:26.975 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.234 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:27.492 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:27.492 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:27.492 [183/268] Linking static target lib/librte_cryptodev.a 00:04:27.750 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:28.316 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:28.316 [186/268] Linking static target lib/librte_power.a 00:04:28.316 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:28.883 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:28.883 [189/268] Linking static target lib/librte_reorder.a 00:04:28.883 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:28.883 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:29.141 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:29.141 [193/268] Linking static target lib/librte_security.a 00:04:29.741 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.741 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:30.308 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.308 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.613 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:30.872 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.872 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:31.130 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:31.130 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:31.388 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:31.646 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:32.214 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:32.214 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:32.214 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:32.472 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:32.472 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:32.472 [210/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.472 [211/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:32.730 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:32.731 [213/268] Linking target lib/librte_eal.so.24.1 00:04:32.731 [214/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:32.731 [215/268] Linking target lib/librte_meter.so.24.1 00:04:32.994 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:32.994 [217/268] Linking target lib/librte_pci.so.24.1 00:04:32.994 [218/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:32.994 [219/268] Linking target lib/librte_ring.so.24.1 00:04:32.994 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:32.994 [221/268] Linking target lib/librte_timer.so.24.1 00:04:32.994 [222/268] Linking static target drivers/librte_bus_pci.a 00:04:32.994 [223/268] Linking target lib/librte_dmadev.so.24.1 00:04:32.994 [224/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:33.254 [225/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:33.254 [226/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:33.254 [227/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:33.254 [228/268] Linking static target drivers/librte_bus_vdev.a 00:04:33.254 [229/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:33.254 [230/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:33.254 [231/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:33.254 [232/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:33.254 [233/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:33.254 [234/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:33.511 [235/268] Linking target lib/librte_rcu.so.24.1 00:04:33.511 [236/268] Linking target lib/librte_mempool.so.24.1 00:04:33.769 [237/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:33.770 [238/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:33.770 [239/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:33.770 [240/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:33.770 [241/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:33.770 [242/268] Linking static target drivers/librte_mempool_ring.a 00:04:33.770 [243/268] Linking target lib/librte_mbuf.so.24.1 00:04:33.770 [244/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.770 [245/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:34.027 [246/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:34.027 [247/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.027 [248/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:34.027 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:34.027 [250/268] Linking target lib/librte_reorder.so.24.1 00:04:34.027 [251/268] Linking target lib/librte_net.so.24.1 00:04:34.027 [252/268] 
Linking target lib/librte_cryptodev.so.24.1 00:04:34.027 [253/268] Linking target lib/librte_compressdev.so.24.1 00:04:34.285 [254/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:34.285 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:34.285 [256/268] Linking target lib/librte_security.so.24.1 00:04:34.285 [257/268] Linking target lib/librte_cmdline.so.24.1 00:04:34.543 [258/268] Linking target lib/librte_hash.so.24.1 00:04:34.543 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:35.478 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.737 [261/268] Linking target lib/librte_ethdev.so.24.1 00:04:35.995 [262/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:35.995 [263/268] Linking target lib/librte_power.so.24.1 00:04:36.254 [264/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:42.819 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:42.819 [266/268] Linking static target lib/librte_vhost.a 00:04:44.195 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.195 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:44.195 INFO: autodetecting backend as ninja 00:04:44.195 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:10.734 CC lib/ut/ut.o 00:05:10.734 CC lib/ut_mock/mock.o 00:05:10.734 CC lib/log/log_flags.o 00:05:10.734 CC lib/log/log_deprecated.o 00:05:10.734 CC lib/log/log.o 00:05:10.734 LIB libspdk_log.a 00:05:10.734 LIB libspdk_ut.a 00:05:10.734 SO libspdk_ut.so.2.0 00:05:10.734 LIB libspdk_ut_mock.a 00:05:10.734 SO libspdk_log.so.7.1 00:05:10.734 SO libspdk_ut_mock.so.6.0 00:05:10.734 SYMLINK libspdk_ut.so 00:05:10.734 SYMLINK libspdk_log.so 00:05:10.734 SYMLINK libspdk_ut_mock.so 00:05:10.734 CXX lib/trace_parser/trace.o 00:05:10.734 CC lib/ioat/ioat.o 00:05:10.734 CC lib/dma/dma.o 00:05:10.734 CC lib/util/base64.o 00:05:10.734 CC lib/util/bit_array.o 00:05:10.734 CC lib/util/cpuset.o 00:05:10.734 CC lib/util/crc16.o 00:05:10.734 CC lib/util/crc32.o 00:05:10.734 CC lib/util/crc32c.o 00:05:10.734 CC lib/vfio_user/host/vfio_user_pci.o 00:05:10.734 CC lib/vfio_user/host/vfio_user.o 00:05:10.734 CC lib/util/crc32_ieee.o 00:05:10.734 CC lib/util/crc64.o 00:05:10.734 LIB libspdk_dma.a 00:05:10.734 SO libspdk_dma.so.5.0 00:05:10.734 CC lib/util/dif.o 00:05:10.734 CC lib/util/fd.o 00:05:10.734 SYMLINK libspdk_dma.so 00:05:10.734 CC lib/util/fd_group.o 00:05:10.734 LIB libspdk_ioat.a 00:05:10.734 CC lib/util/file.o 00:05:10.734 LIB libspdk_vfio_user.a 00:05:10.734 SO libspdk_ioat.so.7.0 00:05:10.734 CC lib/util/hexlify.o 00:05:10.734 CC lib/util/iov.o 00:05:10.734 SO libspdk_vfio_user.so.5.0 00:05:10.734 CC lib/util/math.o 00:05:10.734 CC lib/util/net.o 00:05:10.734 SYMLINK libspdk_ioat.so 00:05:10.734 CC lib/util/pipe.o 00:05:10.734 SYMLINK libspdk_vfio_user.so 00:05:10.734 CC lib/util/strerror_tls.o 00:05:10.734 CC lib/util/string.o 00:05:10.734 CC lib/util/uuid.o 00:05:10.734 CC lib/util/xor.o 00:05:10.734 CC lib/util/zipf.o 00:05:10.734 CC lib/util/md5.o 00:05:10.734 LIB libspdk_util.a 00:05:10.734 SO libspdk_util.so.10.1 00:05:10.734 SYMLINK libspdk_util.so 00:05:10.734 LIB libspdk_trace_parser.a 00:05:10.734 SO libspdk_trace_parser.so.6.0 00:05:10.734 CC lib/idxd/idxd.o 
00:05:10.734 CC lib/idxd/idxd_kernel.o 00:05:10.734 CC lib/idxd/idxd_user.o 00:05:10.734 CC lib/vmd/vmd.o 00:05:10.734 CC lib/vmd/led.o 00:05:10.734 CC lib/env_dpdk/env.o 00:05:10.734 CC lib/rdma_utils/rdma_utils.o 00:05:10.734 CC lib/json/json_parse.o 00:05:10.734 CC lib/conf/conf.o 00:05:10.993 SYMLINK libspdk_trace_parser.so 00:05:10.993 CC lib/json/json_util.o 00:05:10.993 CC lib/json/json_write.o 00:05:11.251 CC lib/env_dpdk/memory.o 00:05:11.251 LIB libspdk_rdma_utils.a 00:05:11.251 SO libspdk_rdma_utils.so.1.0 00:05:11.251 CC lib/env_dpdk/pci.o 00:05:11.251 CC lib/env_dpdk/init.o 00:05:11.251 LIB libspdk_conf.a 00:05:11.251 SYMLINK libspdk_rdma_utils.so 00:05:11.251 CC lib/env_dpdk/threads.o 00:05:11.251 SO libspdk_conf.so.6.0 00:05:11.509 LIB libspdk_json.a 00:05:11.509 SO libspdk_json.so.6.0 00:05:11.509 SYMLINK libspdk_conf.so 00:05:11.509 CC lib/env_dpdk/pci_ioat.o 00:05:11.509 SYMLINK libspdk_json.so 00:05:11.509 CC lib/env_dpdk/pci_virtio.o 00:05:11.509 CC lib/rdma_provider/common.o 00:05:11.509 CC lib/env_dpdk/pci_vmd.o 00:05:11.768 LIB libspdk_vmd.a 00:05:11.768 SO libspdk_vmd.so.6.0 00:05:11.768 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:11.768 CC lib/env_dpdk/pci_idxd.o 00:05:11.768 CC lib/env_dpdk/pci_event.o 00:05:11.768 SYMLINK libspdk_vmd.so 00:05:11.768 CC lib/env_dpdk/sigbus_handler.o 00:05:11.768 CC lib/env_dpdk/pci_dpdk.o 00:05:11.768 CC lib/jsonrpc/jsonrpc_server.o 00:05:12.027 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:12.027 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:12.027 LIB libspdk_rdma_provider.a 00:05:12.027 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:12.027 LIB libspdk_idxd.a 00:05:12.027 SO libspdk_rdma_provider.so.7.0 00:05:12.027 CC lib/jsonrpc/jsonrpc_client.o 00:05:12.027 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:12.285 SO libspdk_idxd.so.12.1 00:05:12.285 SYMLINK libspdk_rdma_provider.so 00:05:12.285 SYMLINK libspdk_idxd.so 00:05:12.285 LIB libspdk_jsonrpc.a 00:05:12.543 SO libspdk_jsonrpc.so.6.0 00:05:12.543 SYMLINK libspdk_jsonrpc.so 00:05:12.801 CC lib/rpc/rpc.o 00:05:13.059 LIB libspdk_rpc.a 00:05:13.317 SO libspdk_rpc.so.6.0 00:05:13.317 SYMLINK libspdk_rpc.so 00:05:13.574 CC lib/trace/trace_flags.o 00:05:13.574 CC lib/trace/trace.o 00:05:13.574 CC lib/trace/trace_rpc.o 00:05:13.574 CC lib/notify/notify.o 00:05:13.574 CC lib/notify/notify_rpc.o 00:05:13.574 CC lib/keyring/keyring.o 00:05:13.574 CC lib/keyring/keyring_rpc.o 00:05:13.574 LIB libspdk_env_dpdk.a 00:05:13.574 SO libspdk_env_dpdk.so.15.1 00:05:13.833 LIB libspdk_notify.a 00:05:13.833 SO libspdk_notify.so.6.0 00:05:13.833 SYMLINK libspdk_env_dpdk.so 00:05:13.833 SYMLINK libspdk_notify.so 00:05:13.833 LIB libspdk_keyring.a 00:05:13.833 SO libspdk_keyring.so.2.0 00:05:14.091 LIB libspdk_trace.a 00:05:14.091 SYMLINK libspdk_keyring.so 00:05:14.091 SO libspdk_trace.so.11.0 00:05:14.091 SYMLINK libspdk_trace.so 00:05:14.349 CC lib/thread/iobuf.o 00:05:14.349 CC lib/thread/thread.o 00:05:14.349 CC lib/sock/sock.o 00:05:14.349 CC lib/sock/sock_rpc.o 00:05:15.284 LIB libspdk_sock.a 00:05:15.284 SO libspdk_sock.so.10.0 00:05:15.284 SYMLINK libspdk_sock.so 00:05:15.543 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:15.543 CC lib/nvme/nvme_ctrlr.o 00:05:15.543 CC lib/nvme/nvme_ns_cmd.o 00:05:15.543 CC lib/nvme/nvme_fabric.o 00:05:15.543 CC lib/nvme/nvme_ns.o 00:05:15.543 CC lib/nvme/nvme_pcie_common.o 00:05:15.543 CC lib/nvme/nvme_pcie.o 00:05:15.543 CC lib/nvme/nvme_qpair.o 00:05:15.543 CC lib/nvme/nvme.o 00:05:16.917 CC lib/nvme/nvme_quirks.o 00:05:17.175 CC lib/nvme/nvme_transport.o 00:05:17.175 CC 
lib/nvme/nvme_discovery.o 00:05:17.175 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:17.175 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:17.433 CC lib/nvme/nvme_tcp.o 00:05:17.433 CC lib/nvme/nvme_opal.o 00:05:17.692 CC lib/nvme/nvme_io_msg.o 00:05:17.980 CC lib/nvme/nvme_poll_group.o 00:05:17.980 LIB libspdk_thread.a 00:05:17.980 SO libspdk_thread.so.11.0 00:05:17.980 CC lib/nvme/nvme_zns.o 00:05:18.238 SYMLINK libspdk_thread.so 00:05:18.238 CC lib/nvme/nvme_stubs.o 00:05:18.238 CC lib/nvme/nvme_auth.o 00:05:18.496 CC lib/accel/accel.o 00:05:18.755 CC lib/accel/accel_rpc.o 00:05:19.013 CC lib/blob/blobstore.o 00:05:19.013 CC lib/init/json_config.o 00:05:19.013 CC lib/virtio/virtio.o 00:05:19.013 CC lib/virtio/virtio_vhost_user.o 00:05:19.271 CC lib/virtio/virtio_vfio_user.o 00:05:19.271 CC lib/virtio/virtio_pci.o 00:05:19.529 CC lib/init/subsystem.o 00:05:19.787 CC lib/init/subsystem_rpc.o 00:05:19.787 CC lib/blob/request.o 00:05:19.787 CC lib/blob/zeroes.o 00:05:19.787 CC lib/fsdev/fsdev.o 00:05:19.787 LIB libspdk_virtio.a 00:05:19.787 CC lib/blob/blob_bs_dev.o 00:05:20.046 SO libspdk_virtio.so.7.0 00:05:20.046 CC lib/init/rpc.o 00:05:20.046 CC lib/nvme/nvme_cuse.o 00:05:20.046 CC lib/fsdev/fsdev_io.o 00:05:20.046 SYMLINK libspdk_virtio.so 00:05:20.046 CC lib/fsdev/fsdev_rpc.o 00:05:20.304 CC lib/nvme/nvme_rdma.o 00:05:20.304 CC lib/accel/accel_sw.o 00:05:20.304 LIB libspdk_init.a 00:05:20.304 SO libspdk_init.so.6.0 00:05:20.562 SYMLINK libspdk_init.so 00:05:20.821 CC lib/event/app.o 00:05:20.821 CC lib/event/log_rpc.o 00:05:20.821 CC lib/event/reactor.o 00:05:20.821 CC lib/event/app_rpc.o 00:05:21.079 CC lib/event/scheduler_static.o 00:05:21.079 LIB libspdk_accel.a 00:05:21.079 SO libspdk_accel.so.16.0 00:05:21.079 SYMLINK libspdk_accel.so 00:05:21.079 LIB libspdk_fsdev.a 00:05:21.337 SO libspdk_fsdev.so.2.0 00:05:21.337 SYMLINK libspdk_fsdev.so 00:05:21.337 CC lib/bdev/bdev.o 00:05:21.337 CC lib/bdev/bdev_rpc.o 00:05:21.337 CC lib/bdev/bdev_zone.o 00:05:21.337 CC lib/bdev/part.o 00:05:21.337 CC lib/bdev/scsi_nvme.o 00:05:21.595 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:21.595 LIB libspdk_event.a 00:05:21.595 SO libspdk_event.so.14.0 00:05:21.853 SYMLINK libspdk_event.so 00:05:23.240 LIB libspdk_fuse_dispatcher.a 00:05:23.240 SO libspdk_fuse_dispatcher.so.1.0 00:05:23.240 SYMLINK libspdk_fuse_dispatcher.so 00:05:23.240 LIB libspdk_nvme.a 00:05:23.805 SO libspdk_nvme.so.15.0 00:05:24.063 SYMLINK libspdk_nvme.so 00:05:24.997 LIB libspdk_blob.a 00:05:24.997 SO libspdk_blob.so.12.0 00:05:24.997 SYMLINK libspdk_blob.so 00:05:25.255 CC lib/lvol/lvol.o 00:05:25.255 CC lib/blobfs/blobfs.o 00:05:25.255 CC lib/blobfs/tree.o 00:05:26.189 LIB libspdk_bdev.a 00:05:26.189 SO libspdk_bdev.so.17.0 00:05:26.189 SYMLINK libspdk_bdev.so 00:05:26.447 CC lib/ftl/ftl_core.o 00:05:26.447 CC lib/ftl/ftl_init.o 00:05:26.447 CC lib/ftl/ftl_layout.o 00:05:26.447 CC lib/ftl/ftl_debug.o 00:05:26.447 CC lib/scsi/dev.o 00:05:26.447 CC lib/nbd/nbd.o 00:05:26.447 CC lib/nvmf/ctrlr.o 00:05:26.447 CC lib/ublk/ublk.o 00:05:26.447 LIB libspdk_blobfs.a 00:05:26.447 SO libspdk_blobfs.so.11.0 00:05:26.447 SYMLINK libspdk_blobfs.so 00:05:26.447 CC lib/ublk/ublk_rpc.o 00:05:26.447 LIB libspdk_lvol.a 00:05:26.705 CC lib/ftl/ftl_io.o 00:05:26.705 SO libspdk_lvol.so.11.0 00:05:26.705 CC lib/scsi/lun.o 00:05:26.705 CC lib/scsi/port.o 00:05:26.705 SYMLINK libspdk_lvol.so 00:05:26.705 CC lib/nvmf/ctrlr_discovery.o 00:05:26.705 CC lib/nvmf/ctrlr_bdev.o 00:05:26.705 CC lib/nbd/nbd_rpc.o 00:05:26.963 CC lib/nvmf/subsystem.o 00:05:26.963 CC 
lib/nvmf/nvmf.o 00:05:26.963 CC lib/nvmf/nvmf_rpc.o 00:05:26.963 CC lib/ftl/ftl_sb.o 00:05:26.963 LIB libspdk_nbd.a 00:05:26.963 SO libspdk_nbd.so.7.0 00:05:26.963 CC lib/scsi/scsi.o 00:05:27.221 SYMLINK libspdk_nbd.so 00:05:27.221 CC lib/scsi/scsi_bdev.o 00:05:27.221 CC lib/ftl/ftl_l2p.o 00:05:27.221 CC lib/scsi/scsi_pr.o 00:05:27.221 LIB libspdk_ublk.a 00:05:27.221 SO libspdk_ublk.so.3.0 00:05:27.221 CC lib/nvmf/transport.o 00:05:27.480 SYMLINK libspdk_ublk.so 00:05:27.480 CC lib/ftl/ftl_l2p_flat.o 00:05:27.480 CC lib/ftl/ftl_nv_cache.o 00:05:27.737 CC lib/ftl/ftl_band.o 00:05:27.737 CC lib/ftl/ftl_band_ops.o 00:05:27.737 CC lib/scsi/scsi_rpc.o 00:05:27.737 CC lib/nvmf/tcp.o 00:05:27.996 CC lib/scsi/task.o 00:05:28.254 CC lib/nvmf/stubs.o 00:05:28.254 LIB libspdk_scsi.a 00:05:28.254 CC lib/ftl/ftl_writer.o 00:05:28.254 CC lib/ftl/ftl_rq.o 00:05:28.254 CC lib/nvmf/mdns_server.o 00:05:28.254 SO libspdk_scsi.so.9.0 00:05:28.512 CC lib/nvmf/rdma.o 00:05:28.512 SYMLINK libspdk_scsi.so 00:05:28.512 CC lib/ftl/ftl_reloc.o 00:05:28.512 CC lib/ftl/ftl_l2p_cache.o 00:05:28.512 CC lib/nvmf/auth.o 00:05:29.078 CC lib/ftl/ftl_p2l.o 00:05:29.078 CC lib/iscsi/conn.o 00:05:29.078 CC lib/ftl/ftl_p2l_log.o 00:05:29.078 CC lib/ftl/mngt/ftl_mngt.o 00:05:29.078 CC lib/vhost/vhost.o 00:05:29.078 CC lib/vhost/vhost_rpc.o 00:05:29.337 CC lib/vhost/vhost_scsi.o 00:05:29.337 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:29.595 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:29.595 CC lib/iscsi/init_grp.o 00:05:29.853 CC lib/iscsi/iscsi.o 00:05:29.853 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:29.853 CC lib/iscsi/param.o 00:05:29.853 CC lib/vhost/vhost_blk.o 00:05:29.853 CC lib/iscsi/portal_grp.o 00:05:30.112 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:30.112 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:30.112 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:30.369 CC lib/iscsi/tgt_node.o 00:05:30.369 CC lib/iscsi/iscsi_subsystem.o 00:05:30.369 CC lib/iscsi/iscsi_rpc.o 00:05:30.369 CC lib/vhost/rte_vhost_user.o 00:05:30.627 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:30.627 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:30.886 CC lib/iscsi/task.o 00:05:30.886 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:30.886 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:31.521 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:31.521 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:31.521 CC lib/ftl/utils/ftl_conf.o 00:05:31.521 CC lib/ftl/utils/ftl_md.o 00:05:31.521 CC lib/ftl/utils/ftl_mempool.o 00:05:31.521 CC lib/ftl/utils/ftl_bitmap.o 00:05:31.779 CC lib/ftl/utils/ftl_property.o 00:05:31.779 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:31.779 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:31.779 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:32.038 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:32.038 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:32.038 LIB libspdk_vhost.a 00:05:32.038 SO libspdk_vhost.so.8.0 00:05:32.297 LIB libspdk_nvmf.a 00:05:32.297 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:32.297 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:32.297 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:32.297 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:32.297 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:32.297 SYMLINK libspdk_vhost.so 00:05:32.297 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:32.555 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:32.556 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:32.556 SO libspdk_nvmf.so.20.0 00:05:32.556 CC lib/ftl/base/ftl_base_dev.o 00:05:32.556 CC lib/ftl/base/ftl_base_bdev.o 00:05:32.556 CC lib/ftl/ftl_trace.o 00:05:32.814 LIB libspdk_iscsi.a 00:05:32.814 SYMLINK libspdk_nvmf.so 00:05:32.814 SO libspdk_iscsi.so.8.0 
00:05:33.072 SYMLINK libspdk_iscsi.so 00:05:33.072 LIB libspdk_ftl.a 00:05:33.638 SO libspdk_ftl.so.9.0 00:05:33.898 SYMLINK libspdk_ftl.so 00:05:34.157 CC module/env_dpdk/env_dpdk_rpc.o 00:05:34.416 CC module/sock/posix/posix.o 00:05:34.416 CC module/keyring/file/keyring.o 00:05:34.416 CC module/accel/iaa/accel_iaa.o 00:05:34.416 CC module/accel/ioat/accel_ioat.o 00:05:34.416 CC module/accel/error/accel_error.o 00:05:34.416 CC module/blob/bdev/blob_bdev.o 00:05:34.416 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:34.416 CC module/accel/dsa/accel_dsa.o 00:05:34.416 CC module/fsdev/aio/fsdev_aio.o 00:05:34.416 LIB libspdk_env_dpdk_rpc.a 00:05:34.674 SO libspdk_env_dpdk_rpc.so.6.0 00:05:34.674 CC module/keyring/file/keyring_rpc.o 00:05:34.674 SYMLINK libspdk_env_dpdk_rpc.so 00:05:34.674 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:34.674 CC module/accel/ioat/accel_ioat_rpc.o 00:05:34.674 LIB libspdk_scheduler_dynamic.a 00:05:34.933 LIB libspdk_blob_bdev.a 00:05:34.933 CC module/accel/error/accel_error_rpc.o 00:05:34.933 CC module/accel/iaa/accel_iaa_rpc.o 00:05:34.933 SO libspdk_scheduler_dynamic.so.4.0 00:05:34.933 SO libspdk_blob_bdev.so.12.0 00:05:34.933 SYMLINK libspdk_scheduler_dynamic.so 00:05:34.933 LIB libspdk_keyring_file.a 00:05:34.933 CC module/accel/dsa/accel_dsa_rpc.o 00:05:34.933 SO libspdk_keyring_file.so.2.0 00:05:34.933 SYMLINK libspdk_blob_bdev.so 00:05:34.933 LIB libspdk_accel_ioat.a 00:05:34.933 LIB libspdk_accel_iaa.a 00:05:35.220 SO libspdk_accel_ioat.so.6.0 00:05:35.220 LIB libspdk_accel_error.a 00:05:35.220 SYMLINK libspdk_keyring_file.so 00:05:35.220 SO libspdk_accel_iaa.so.3.0 00:05:35.221 SO libspdk_accel_error.so.2.0 00:05:35.221 SYMLINK libspdk_accel_ioat.so 00:05:35.221 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:35.221 SYMLINK libspdk_accel_iaa.so 00:05:35.221 CC module/fsdev/aio/linux_aio_mgr.o 00:05:35.221 SYMLINK libspdk_accel_error.so 00:05:35.221 LIB libspdk_accel_dsa.a 00:05:35.221 SO libspdk_accel_dsa.so.5.0 00:05:35.479 CC module/keyring/linux/keyring.o 00:05:35.479 CC module/bdev/delay/vbdev_delay.o 00:05:35.479 CC module/blobfs/bdev/blobfs_bdev.o 00:05:35.479 LIB libspdk_scheduler_dpdk_governor.a 00:05:35.479 SYMLINK libspdk_accel_dsa.so 00:05:35.479 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:35.479 CC module/keyring/linux/keyring_rpc.o 00:05:35.479 CC module/bdev/error/vbdev_error.o 00:05:35.479 LIB libspdk_fsdev_aio.a 00:05:35.479 CC module/scheduler/gscheduler/gscheduler.o 00:05:35.739 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:35.739 CC module/bdev/error/vbdev_error_rpc.o 00:05:35.739 SO libspdk_fsdev_aio.so.1.0 00:05:35.739 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:35.739 LIB libspdk_sock_posix.a 00:05:35.739 SYMLINK libspdk_fsdev_aio.so 00:05:35.739 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:35.739 CC module/bdev/gpt/gpt.o 00:05:35.739 LIB libspdk_keyring_linux.a 00:05:35.739 SO libspdk_sock_posix.so.6.0 00:05:35.739 LIB libspdk_scheduler_gscheduler.a 00:05:35.739 SO libspdk_keyring_linux.so.1.0 00:05:35.739 SO libspdk_scheduler_gscheduler.so.4.0 00:05:35.998 SYMLINK libspdk_sock_posix.so 00:05:35.998 SYMLINK libspdk_scheduler_gscheduler.so 00:05:35.998 CC module/bdev/gpt/vbdev_gpt.o 00:05:35.998 SYMLINK libspdk_keyring_linux.so 00:05:35.998 CC module/bdev/lvol/vbdev_lvol.o 00:05:35.998 LIB libspdk_blobfs_bdev.a 00:05:35.998 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:35.998 SO libspdk_blobfs_bdev.so.6.0 00:05:35.998 LIB libspdk_bdev_delay.a 00:05:35.998 SO libspdk_bdev_delay.so.6.0 00:05:36.256 LIB 
libspdk_bdev_error.a 00:05:36.256 SYMLINK libspdk_blobfs_bdev.so 00:05:36.256 SO libspdk_bdev_error.so.6.0 00:05:36.256 CC module/bdev/malloc/bdev_malloc.o 00:05:36.256 SYMLINK libspdk_bdev_delay.so 00:05:36.256 CC module/bdev/null/bdev_null.o 00:05:36.256 CC module/bdev/nvme/bdev_nvme.o 00:05:36.256 SYMLINK libspdk_bdev_error.so 00:05:36.256 CC module/bdev/null/bdev_null_rpc.o 00:05:36.256 CC module/bdev/passthru/vbdev_passthru.o 00:05:36.515 CC module/bdev/raid/bdev_raid.o 00:05:36.515 CC module/bdev/split/vbdev_split.o 00:05:36.515 LIB libspdk_bdev_gpt.a 00:05:36.515 CC module/bdev/raid/bdev_raid_rpc.o 00:05:36.515 SO libspdk_bdev_gpt.so.6.0 00:05:36.774 LIB libspdk_bdev_null.a 00:05:36.775 SO libspdk_bdev_null.so.6.0 00:05:36.775 SYMLINK libspdk_bdev_gpt.so 00:05:36.775 CC module/bdev/raid/bdev_raid_sb.o 00:05:36.775 CC module/bdev/raid/raid0.o 00:05:36.775 SYMLINK libspdk_bdev_null.so 00:05:36.775 CC module/bdev/raid/raid1.o 00:05:37.033 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:37.033 CC module/bdev/split/vbdev_split_rpc.o 00:05:37.033 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:37.033 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:37.295 LIB libspdk_bdev_split.a 00:05:37.295 LIB libspdk_bdev_lvol.a 00:05:37.295 SO libspdk_bdev_lvol.so.6.0 00:05:37.295 SO libspdk_bdev_split.so.6.0 00:05:37.295 CC module/bdev/nvme/nvme_rpc.o 00:05:37.295 LIB libspdk_bdev_malloc.a 00:05:37.295 SO libspdk_bdev_malloc.so.6.0 00:05:37.295 LIB libspdk_bdev_passthru.a 00:05:37.295 SYMLINK libspdk_bdev_lvol.so 00:05:37.295 CC module/bdev/nvme/bdev_mdns_client.o 00:05:37.295 CC module/bdev/raid/concat.o 00:05:37.295 SYMLINK libspdk_bdev_split.so 00:05:37.295 SO libspdk_bdev_passthru.so.6.0 00:05:37.554 SYMLINK libspdk_bdev_malloc.so 00:05:37.554 SYMLINK libspdk_bdev_passthru.so 00:05:37.554 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:37.554 CC module/bdev/nvme/vbdev_opal.o 00:05:37.554 CC module/bdev/xnvme/bdev_xnvme.o 00:05:37.812 CC module/bdev/aio/bdev_aio.o 00:05:37.812 CC module/bdev/aio/bdev_aio_rpc.o 00:05:37.812 CC module/bdev/ftl/bdev_ftl.o 00:05:37.812 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:38.070 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:38.329 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:38.329 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:38.329 LIB libspdk_bdev_ftl.a 00:05:38.329 LIB libspdk_bdev_xnvme.a 00:05:38.329 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:38.329 SO libspdk_bdev_xnvme.so.3.0 00:05:38.329 SO libspdk_bdev_ftl.so.6.0 00:05:38.329 LIB libspdk_bdev_aio.a 00:05:38.329 SO libspdk_bdev_aio.so.6.0 00:05:38.329 SYMLINK libspdk_bdev_ftl.so 00:05:38.329 SYMLINK libspdk_bdev_xnvme.so 00:05:38.587 CC module/bdev/iscsi/bdev_iscsi.o 00:05:38.587 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:38.587 SYMLINK libspdk_bdev_aio.so 00:05:38.587 LIB libspdk_bdev_raid.a 00:05:38.587 LIB libspdk_bdev_zone_block.a 00:05:38.587 SO libspdk_bdev_zone_block.so.6.0 00:05:38.587 SO libspdk_bdev_raid.so.6.0 00:05:38.587 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:38.587 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:38.587 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:38.587 SYMLINK libspdk_bdev_zone_block.so 00:05:38.587 SYMLINK libspdk_bdev_raid.so 00:05:38.846 LIB libspdk_bdev_iscsi.a 00:05:39.104 SO libspdk_bdev_iscsi.so.6.0 00:05:39.104 SYMLINK libspdk_bdev_iscsi.so 00:05:39.362 LIB libspdk_bdev_virtio.a 00:05:39.362 SO libspdk_bdev_virtio.so.6.0 00:05:39.620 SYMLINK libspdk_bdev_virtio.so 00:05:40.993 LIB libspdk_bdev_nvme.a 00:05:40.993 SO libspdk_bdev_nvme.so.7.1 
00:05:40.993 SYMLINK libspdk_bdev_nvme.so 00:05:41.559 CC module/event/subsystems/iobuf/iobuf.o 00:05:41.559 CC module/event/subsystems/vmd/vmd.o 00:05:41.559 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:41.559 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:41.559 CC module/event/subsystems/sock/sock.o 00:05:41.559 CC module/event/subsystems/keyring/keyring.o 00:05:41.559 CC module/event/subsystems/scheduler/scheduler.o 00:05:41.559 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:41.559 CC module/event/subsystems/fsdev/fsdev.o 00:05:41.817 LIB libspdk_event_keyring.a 00:05:41.817 SO libspdk_event_keyring.so.1.0 00:05:41.817 SYMLINK libspdk_event_keyring.so 00:05:41.817 LIB libspdk_event_sock.a 00:05:41.817 LIB libspdk_event_iobuf.a 00:05:41.817 LIB libspdk_event_vhost_blk.a 00:05:41.817 LIB libspdk_event_vmd.a 00:05:41.817 LIB libspdk_event_scheduler.a 00:05:41.817 SO libspdk_event_sock.so.5.0 00:05:41.817 LIB libspdk_event_fsdev.a 00:05:41.817 SO libspdk_event_iobuf.so.3.0 00:05:41.817 SO libspdk_event_vhost_blk.so.3.0 00:05:41.817 SO libspdk_event_vmd.so.6.0 00:05:41.817 SO libspdk_event_scheduler.so.4.0 00:05:41.817 SO libspdk_event_fsdev.so.1.0 00:05:41.817 SYMLINK libspdk_event_sock.so 00:05:41.817 SYMLINK libspdk_event_iobuf.so 00:05:41.817 SYMLINK libspdk_event_vhost_blk.so 00:05:42.075 SYMLINK libspdk_event_scheduler.so 00:05:42.075 SYMLINK libspdk_event_vmd.so 00:05:42.075 SYMLINK libspdk_event_fsdev.so 00:05:42.075 CC module/event/subsystems/accel/accel.o 00:05:42.334 LIB libspdk_event_accel.a 00:05:42.592 SO libspdk_event_accel.so.6.0 00:05:42.592 SYMLINK libspdk_event_accel.so 00:05:42.850 CC module/event/subsystems/bdev/bdev.o 00:05:43.109 LIB libspdk_event_bdev.a 00:05:43.109 SO libspdk_event_bdev.so.6.0 00:05:43.109 SYMLINK libspdk_event_bdev.so 00:05:43.368 CC module/event/subsystems/scsi/scsi.o 00:05:43.368 CC module/event/subsystems/nbd/nbd.o 00:05:43.368 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:43.368 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:43.368 CC module/event/subsystems/ublk/ublk.o 00:05:43.626 LIB libspdk_event_nbd.a 00:05:43.626 LIB libspdk_event_ublk.a 00:05:43.626 SO libspdk_event_nbd.so.6.0 00:05:43.626 SO libspdk_event_ublk.so.3.0 00:05:43.626 LIB libspdk_event_scsi.a 00:05:43.626 SO libspdk_event_scsi.so.6.0 00:05:43.626 SYMLINK libspdk_event_ublk.so 00:05:43.626 SYMLINK libspdk_event_nbd.so 00:05:43.626 LIB libspdk_event_nvmf.a 00:05:43.885 SYMLINK libspdk_event_scsi.so 00:05:43.885 SO libspdk_event_nvmf.so.6.0 00:05:43.885 SYMLINK libspdk_event_nvmf.so 00:05:43.885 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:43.885 CC module/event/subsystems/iscsi/iscsi.o 00:05:44.143 LIB libspdk_event_vhost_scsi.a 00:05:44.143 LIB libspdk_event_iscsi.a 00:05:44.143 SO libspdk_event_vhost_scsi.so.3.0 00:05:44.143 SO libspdk_event_iscsi.so.6.0 00:05:44.401 SYMLINK libspdk_event_vhost_scsi.so 00:05:44.401 SYMLINK libspdk_event_iscsi.so 00:05:44.401 SO libspdk.so.6.0 00:05:44.401 SYMLINK libspdk.so 00:05:44.659 CC app/trace_record/trace_record.o 00:05:44.659 CC app/spdk_lspci/spdk_lspci.o 00:05:44.659 CXX app/trace/trace.o 00:05:44.659 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:44.659 CC app/iscsi_tgt/iscsi_tgt.o 00:05:44.659 CC app/nvmf_tgt/nvmf_main.o 00:05:44.659 CC test/thread/poller_perf/poller_perf.o 00:05:44.659 CC examples/ioat/perf/perf.o 00:05:44.659 CC examples/util/zipf/zipf.o 00:05:44.920 CC app/spdk_tgt/spdk_tgt.o 00:05:44.920 LINK spdk_lspci 00:05:44.920 LINK interrupt_tgt 00:05:44.920 LINK zipf 00:05:44.920 LINK 
nvmf_tgt 00:05:44.920 LINK poller_perf 00:05:44.920 LINK iscsi_tgt 00:05:44.920 LINK spdk_trace_record 00:05:44.920 LINK ioat_perf 00:05:45.182 CC app/spdk_nvme_perf/perf.o 00:05:45.182 LINK spdk_tgt 00:05:45.182 LINK spdk_trace 00:05:45.182 CC app/spdk_nvme_identify/identify.o 00:05:45.182 CC examples/ioat/verify/verify.o 00:05:45.440 CC app/spdk_nvme_discover/discovery_aer.o 00:05:45.440 CC app/spdk_top/spdk_top.o 00:05:45.440 CC test/app/bdev_svc/bdev_svc.o 00:05:45.440 CC test/dma/test_dma/test_dma.o 00:05:45.440 CC examples/thread/thread/thread_ex.o 00:05:45.440 TEST_HEADER include/spdk/accel.h 00:05:45.440 TEST_HEADER include/spdk/accel_module.h 00:05:45.440 TEST_HEADER include/spdk/assert.h 00:05:45.440 TEST_HEADER include/spdk/barrier.h 00:05:45.440 TEST_HEADER include/spdk/base64.h 00:05:45.440 TEST_HEADER include/spdk/bdev.h 00:05:45.440 TEST_HEADER include/spdk/bdev_module.h 00:05:45.440 TEST_HEADER include/spdk/bdev_zone.h 00:05:45.440 TEST_HEADER include/spdk/bit_array.h 00:05:45.440 TEST_HEADER include/spdk/bit_pool.h 00:05:45.440 TEST_HEADER include/spdk/blob_bdev.h 00:05:45.440 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:45.440 TEST_HEADER include/spdk/blobfs.h 00:05:45.440 TEST_HEADER include/spdk/blob.h 00:05:45.440 TEST_HEADER include/spdk/conf.h 00:05:45.440 TEST_HEADER include/spdk/config.h 00:05:45.440 TEST_HEADER include/spdk/cpuset.h 00:05:45.440 TEST_HEADER include/spdk/crc16.h 00:05:45.440 TEST_HEADER include/spdk/crc32.h 00:05:45.440 TEST_HEADER include/spdk/crc64.h 00:05:45.440 TEST_HEADER include/spdk/dif.h 00:05:45.440 TEST_HEADER include/spdk/dma.h 00:05:45.440 TEST_HEADER include/spdk/endian.h 00:05:45.440 TEST_HEADER include/spdk/env_dpdk.h 00:05:45.440 TEST_HEADER include/spdk/env.h 00:05:45.440 TEST_HEADER include/spdk/event.h 00:05:45.440 TEST_HEADER include/spdk/fd_group.h 00:05:45.440 TEST_HEADER include/spdk/fd.h 00:05:45.440 TEST_HEADER include/spdk/file.h 00:05:45.440 TEST_HEADER include/spdk/fsdev.h 00:05:45.440 TEST_HEADER include/spdk/fsdev_module.h 00:05:45.440 TEST_HEADER include/spdk/ftl.h 00:05:45.440 TEST_HEADER include/spdk/gpt_spec.h 00:05:45.440 TEST_HEADER include/spdk/hexlify.h 00:05:45.440 TEST_HEADER include/spdk/histogram_data.h 00:05:45.440 TEST_HEADER include/spdk/idxd.h 00:05:45.440 TEST_HEADER include/spdk/idxd_spec.h 00:05:45.440 TEST_HEADER include/spdk/init.h 00:05:45.440 TEST_HEADER include/spdk/ioat.h 00:05:45.440 TEST_HEADER include/spdk/ioat_spec.h 00:05:45.699 LINK spdk_nvme_discover 00:05:45.699 TEST_HEADER include/spdk/iscsi_spec.h 00:05:45.699 TEST_HEADER include/spdk/json.h 00:05:45.699 TEST_HEADER include/spdk/jsonrpc.h 00:05:45.699 TEST_HEADER include/spdk/keyring.h 00:05:45.699 TEST_HEADER include/spdk/keyring_module.h 00:05:45.699 TEST_HEADER include/spdk/likely.h 00:05:45.699 TEST_HEADER include/spdk/log.h 00:05:45.699 TEST_HEADER include/spdk/lvol.h 00:05:45.699 TEST_HEADER include/spdk/md5.h 00:05:45.699 TEST_HEADER include/spdk/memory.h 00:05:45.699 TEST_HEADER include/spdk/mmio.h 00:05:45.699 TEST_HEADER include/spdk/nbd.h 00:05:45.699 TEST_HEADER include/spdk/net.h 00:05:45.699 TEST_HEADER include/spdk/notify.h 00:05:45.699 TEST_HEADER include/spdk/nvme.h 00:05:45.699 TEST_HEADER include/spdk/nvme_intel.h 00:05:45.699 LINK bdev_svc 00:05:45.699 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:45.699 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:45.699 TEST_HEADER include/spdk/nvme_spec.h 00:05:45.699 TEST_HEADER include/spdk/nvme_zns.h 00:05:45.699 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:45.699 TEST_HEADER 
include/spdk/nvmf_fc_spec.h 00:05:45.699 LINK verify 00:05:45.699 TEST_HEADER include/spdk/nvmf.h 00:05:45.699 TEST_HEADER include/spdk/nvmf_spec.h 00:05:45.699 TEST_HEADER include/spdk/nvmf_transport.h 00:05:45.699 TEST_HEADER include/spdk/opal.h 00:05:45.699 TEST_HEADER include/spdk/opal_spec.h 00:05:45.699 TEST_HEADER include/spdk/pci_ids.h 00:05:45.699 TEST_HEADER include/spdk/pipe.h 00:05:45.699 TEST_HEADER include/spdk/queue.h 00:05:45.699 TEST_HEADER include/spdk/reduce.h 00:05:45.699 TEST_HEADER include/spdk/rpc.h 00:05:45.699 TEST_HEADER include/spdk/scheduler.h 00:05:45.699 TEST_HEADER include/spdk/scsi.h 00:05:45.699 TEST_HEADER include/spdk/scsi_spec.h 00:05:45.699 TEST_HEADER include/spdk/sock.h 00:05:45.699 TEST_HEADER include/spdk/stdinc.h 00:05:45.699 TEST_HEADER include/spdk/string.h 00:05:45.699 TEST_HEADER include/spdk/thread.h 00:05:45.699 CC test/env/mem_callbacks/mem_callbacks.o 00:05:45.699 TEST_HEADER include/spdk/trace.h 00:05:45.699 TEST_HEADER include/spdk/trace_parser.h 00:05:45.699 TEST_HEADER include/spdk/tree.h 00:05:45.699 TEST_HEADER include/spdk/ublk.h 00:05:45.700 TEST_HEADER include/spdk/util.h 00:05:45.700 TEST_HEADER include/spdk/uuid.h 00:05:45.700 TEST_HEADER include/spdk/version.h 00:05:45.700 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:45.700 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:45.700 TEST_HEADER include/spdk/vhost.h 00:05:45.700 TEST_HEADER include/spdk/vmd.h 00:05:45.700 TEST_HEADER include/spdk/xor.h 00:05:45.700 TEST_HEADER include/spdk/zipf.h 00:05:45.700 LINK thread 00:05:45.700 CXX test/cpp_headers/accel.o 00:05:45.959 CC test/app/histogram_perf/histogram_perf.o 00:05:45.959 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:45.959 CXX test/cpp_headers/accel_module.o 00:05:45.959 LINK test_dma 00:05:46.217 CC examples/sock/hello_world/hello_sock.o 00:05:46.217 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:46.217 LINK histogram_perf 00:05:46.217 CXX test/cpp_headers/assert.o 00:05:46.217 LINK spdk_nvme_perf 00:05:46.475 CXX test/cpp_headers/barrier.o 00:05:46.475 LINK hello_sock 00:05:46.475 LINK mem_callbacks 00:05:46.475 LINK spdk_nvme_identify 00:05:46.475 LINK spdk_top 00:05:46.475 CC examples/vmd/lsvmd/lsvmd.o 00:05:46.475 CC examples/vmd/led/led.o 00:05:46.475 CXX test/cpp_headers/base64.o 00:05:46.475 CC examples/idxd/perf/perf.o 00:05:46.733 LINK nvme_fuzz 00:05:46.733 CC test/env/vtophys/vtophys.o 00:05:46.733 LINK lsvmd 00:05:46.733 LINK led 00:05:46.733 CXX test/cpp_headers/bdev.o 00:05:46.733 CC app/spdk_dd/spdk_dd.o 00:05:46.733 CXX test/cpp_headers/bdev_module.o 00:05:46.733 CC examples/accel/perf/accel_perf.o 00:05:47.059 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:47.059 LINK vtophys 00:05:47.059 CXX test/cpp_headers/bdev_zone.o 00:05:47.059 LINK idxd_perf 00:05:47.059 CXX test/cpp_headers/bit_array.o 00:05:47.059 CXX test/cpp_headers/bit_pool.o 00:05:47.059 CXX test/cpp_headers/blob_bdev.o 00:05:47.059 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:47.059 CXX test/cpp_headers/blobfs_bdev.o 00:05:47.318 LINK hello_fsdev 00:05:47.318 CXX test/cpp_headers/blobfs.o 00:05:47.318 CXX test/cpp_headers/blob.o 00:05:47.318 LINK spdk_dd 00:05:47.318 LINK env_dpdk_post_init 00:05:47.318 CXX test/cpp_headers/conf.o 00:05:47.318 CC test/env/pci/pci_ut.o 00:05:47.318 CC test/env/memory/memory_ut.o 00:05:47.577 CXX test/cpp_headers/config.o 00:05:47.577 CXX test/cpp_headers/cpuset.o 00:05:47.577 CXX test/cpp_headers/crc16.o 00:05:47.577 LINK accel_perf 00:05:47.577 CXX test/cpp_headers/crc32.o 00:05:47.577 CXX 
test/cpp_headers/crc64.o 00:05:47.577 CXX test/cpp_headers/dif.o 00:05:47.577 CXX test/cpp_headers/dma.o 00:05:47.836 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:47.836 CXX test/cpp_headers/endian.o 00:05:47.836 CC app/fio/nvme/fio_plugin.o 00:05:47.836 CC app/fio/bdev/fio_plugin.o 00:05:47.836 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:47.836 CC examples/blob/hello_world/hello_blob.o 00:05:48.094 CXX test/cpp_headers/env_dpdk.o 00:05:48.094 LINK pci_ut 00:05:48.094 CC app/vhost/vhost.o 00:05:48.094 CC examples/nvme/hello_world/hello_world.o 00:05:48.353 CXX test/cpp_headers/env.o 00:05:48.353 LINK hello_blob 00:05:48.353 LINK vhost 00:05:48.353 CXX test/cpp_headers/event.o 00:05:48.353 LINK hello_world 00:05:48.611 LINK vhost_fuzz 00:05:48.611 CXX test/cpp_headers/fd_group.o 00:05:48.611 LINK spdk_nvme 00:05:48.611 CXX test/cpp_headers/fd.o 00:05:48.611 CC examples/blob/cli/blobcli.o 00:05:48.611 LINK spdk_bdev 00:05:48.611 CC examples/nvme/reconnect/reconnect.o 00:05:48.611 LINK iscsi_fuzz 00:05:48.611 CC examples/bdev/hello_world/hello_bdev.o 00:05:48.611 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:48.869 CXX test/cpp_headers/file.o 00:05:48.869 CC test/app/jsoncat/jsoncat.o 00:05:48.869 CXX test/cpp_headers/fsdev.o 00:05:48.869 CC examples/bdev/bdevperf/bdevperf.o 00:05:48.869 LINK memory_ut 00:05:48.869 CXX test/cpp_headers/fsdev_module.o 00:05:48.869 LINK jsoncat 00:05:48.869 CXX test/cpp_headers/ftl.o 00:05:49.128 LINK hello_bdev 00:05:49.128 LINK reconnect 00:05:49.128 CC test/event/event_perf/event_perf.o 00:05:49.128 CXX test/cpp_headers/gpt_spec.o 00:05:49.128 CC test/event/reactor/reactor.o 00:05:49.128 CXX test/cpp_headers/hexlify.o 00:05:49.387 LINK blobcli 00:05:49.387 CC test/app/stub/stub.o 00:05:49.387 CC test/event/reactor_perf/reactor_perf.o 00:05:49.387 CC examples/nvme/arbitration/arbitration.o 00:05:49.387 LINK event_perf 00:05:49.387 LINK nvme_manage 00:05:49.387 LINK reactor 00:05:49.387 CXX test/cpp_headers/histogram_data.o 00:05:49.387 LINK reactor_perf 00:05:49.387 LINK stub 00:05:49.387 CXX test/cpp_headers/idxd.o 00:05:49.645 CC test/event/app_repeat/app_repeat.o 00:05:49.645 CC test/event/scheduler/scheduler.o 00:05:49.645 CXX test/cpp_headers/idxd_spec.o 00:05:49.645 CC examples/nvme/hotplug/hotplug.o 00:05:49.903 LINK arbitration 00:05:49.903 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:49.903 CC examples/nvme/abort/abort.o 00:05:49.903 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:49.903 LINK app_repeat 00:05:49.903 CXX test/cpp_headers/init.o 00:05:49.903 LINK scheduler 00:05:50.161 LINK cmb_copy 00:05:50.161 LINK bdevperf 00:05:50.161 CC test/rpc_client/rpc_client_test.o 00:05:50.161 LINK pmr_persistence 00:05:50.161 CC test/nvme/aer/aer.o 00:05:50.161 CXX test/cpp_headers/ioat.o 00:05:50.161 LINK hotplug 00:05:50.420 CC test/nvme/reset/reset.o 00:05:50.420 CC test/nvme/sgl/sgl.o 00:05:50.420 LINK abort 00:05:50.420 CXX test/cpp_headers/ioat_spec.o 00:05:50.420 CXX test/cpp_headers/iscsi_spec.o 00:05:50.678 CXX test/cpp_headers/json.o 00:05:50.678 LINK rpc_client_test 00:05:50.678 CXX test/cpp_headers/jsonrpc.o 00:05:50.678 CC test/nvme/e2edp/nvme_dp.o 00:05:50.678 LINK reset 00:05:50.678 LINK aer 00:05:50.678 LINK sgl 00:05:50.936 CXX test/cpp_headers/keyring.o 00:05:50.936 CXX test/cpp_headers/keyring_module.o 00:05:50.936 CC test/nvme/overhead/overhead.o 00:05:50.936 CC test/nvme/err_injection/err_injection.o 00:05:50.936 CC examples/nvmf/nvmf/nvmf.o 00:05:51.194 CC test/accel/dif/dif.o 00:05:51.194 CC 
test/nvme/startup/startup.o 00:05:51.194 LINK nvme_dp 00:05:51.194 CC test/blobfs/mkfs/mkfs.o 00:05:51.194 CC test/nvme/reserve/reserve.o 00:05:51.194 CXX test/cpp_headers/likely.o 00:05:51.194 CC test/nvme/simple_copy/simple_copy.o 00:05:51.453 LINK err_injection 00:05:51.453 LINK startup 00:05:51.453 LINK overhead 00:05:51.453 LINK mkfs 00:05:51.453 CXX test/cpp_headers/log.o 00:05:51.453 CC test/nvme/connect_stress/connect_stress.o 00:05:51.711 LINK reserve 00:05:51.711 LINK nvmf 00:05:51.711 LINK simple_copy 00:05:51.711 CC test/nvme/boot_partition/boot_partition.o 00:05:51.711 CXX test/cpp_headers/lvol.o 00:05:51.711 LINK connect_stress 00:05:51.969 CC test/nvme/compliance/nvme_compliance.o 00:05:51.969 CC test/nvme/fused_ordering/fused_ordering.o 00:05:51.969 CXX test/cpp_headers/md5.o 00:05:51.969 LINK boot_partition 00:05:51.969 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:51.969 CXX test/cpp_headers/memory.o 00:05:51.969 CC test/nvme/fdp/fdp.o 00:05:51.969 CC test/lvol/esnap/esnap.o 00:05:52.227 CXX test/cpp_headers/mmio.o 00:05:52.227 LINK doorbell_aers 00:05:52.227 CXX test/cpp_headers/nbd.o 00:05:52.227 CXX test/cpp_headers/net.o 00:05:52.485 CC test/nvme/cuse/cuse.o 00:05:52.485 CXX test/cpp_headers/notify.o 00:05:52.485 LINK fused_ordering 00:05:52.485 CXX test/cpp_headers/nvme.o 00:05:52.485 CXX test/cpp_headers/nvme_intel.o 00:05:52.485 LINK dif 00:05:52.744 CXX test/cpp_headers/nvme_ocssd.o 00:05:52.744 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:52.744 LINK nvme_compliance 00:05:52.744 CXX test/cpp_headers/nvme_spec.o 00:05:52.744 CXX test/cpp_headers/nvme_zns.o 00:05:52.744 LINK fdp 00:05:52.744 CXX test/cpp_headers/nvmf_cmd.o 00:05:52.744 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:53.011 CXX test/cpp_headers/nvmf.o 00:05:53.011 CXX test/cpp_headers/nvmf_spec.o 00:05:53.011 CXX test/cpp_headers/nvmf_transport.o 00:05:53.269 CC test/bdev/bdevio/bdevio.o 00:05:53.269 CXX test/cpp_headers/opal.o 00:05:53.269 CXX test/cpp_headers/opal_spec.o 00:05:53.269 CXX test/cpp_headers/pci_ids.o 00:05:53.269 CXX test/cpp_headers/pipe.o 00:05:53.269 CXX test/cpp_headers/queue.o 00:05:53.269 CXX test/cpp_headers/reduce.o 00:05:53.269 CXX test/cpp_headers/rpc.o 00:05:53.269 CXX test/cpp_headers/scheduler.o 00:05:53.269 CXX test/cpp_headers/scsi.o 00:05:53.526 CXX test/cpp_headers/scsi_spec.o 00:05:53.527 CXX test/cpp_headers/sock.o 00:05:53.527 CXX test/cpp_headers/string.o 00:05:53.527 CXX test/cpp_headers/stdinc.o 00:05:53.527 CXX test/cpp_headers/thread.o 00:05:53.527 CXX test/cpp_headers/trace.o 00:05:53.785 CXX test/cpp_headers/trace_parser.o 00:05:53.785 CXX test/cpp_headers/tree.o 00:05:53.785 CXX test/cpp_headers/ublk.o 00:05:53.786 CXX test/cpp_headers/util.o 00:05:53.786 CXX test/cpp_headers/uuid.o 00:05:53.786 CXX test/cpp_headers/version.o 00:05:53.786 CXX test/cpp_headers/vfio_user_pci.o 00:05:53.786 CXX test/cpp_headers/vfio_user_spec.o 00:05:53.786 CXX test/cpp_headers/vhost.o 00:05:54.043 LINK bdevio 00:05:54.043 CXX test/cpp_headers/vmd.o 00:05:54.043 CXX test/cpp_headers/xor.o 00:05:54.043 CXX test/cpp_headers/zipf.o 00:05:54.302 LINK cuse 00:06:00.863 LINK esnap 00:06:01.122 00:06:01.122 real 2m18.272s 00:06:01.122 user 13m34.570s 00:06:01.122 sys 2m11.190s 00:06:01.122 11:14:23 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:01.122 11:14:23 make -- common/autotest_common.sh@10 -- $ set +x 00:06:01.122 ************************************ 00:06:01.122 END TEST make 00:06:01.122 ************************************ 00:06:01.122 11:14:23 -- 
spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:01.122 11:14:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:01.122 11:14:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:01.122 11:14:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:01.122 11:14:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:01.122 11:14:23 -- pm/common@44 -- $ pid=5350 00:06:01.122 11:14:23 -- pm/common@50 -- $ kill -TERM 5350 00:06:01.122 11:14:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:01.122 11:14:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:01.122 11:14:23 -- pm/common@44 -- $ pid=5351 00:06:01.122 11:14:23 -- pm/common@50 -- $ kill -TERM 5351 00:06:01.122 11:14:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:01.122 11:14:23 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:01.122 11:14:23 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:01.122 11:14:23 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:01.122 11:14:23 -- common/autotest_common.sh@1711 -- # lcov --version 00:06:01.122 11:14:23 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:01.122 11:14:23 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.122 11:14:23 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.122 11:14:23 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.122 11:14:23 -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.122 11:14:23 -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.122 11:14:23 -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.122 11:14:23 -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.122 11:14:23 -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.122 11:14:23 -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.122 11:14:23 -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.122 11:14:23 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.122 11:14:23 -- scripts/common.sh@344 -- # case "$op" in 00:06:01.122 11:14:23 -- scripts/common.sh@345 -- # : 1 00:06:01.122 11:14:23 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.122 11:14:23 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.122 11:14:23 -- scripts/common.sh@365 -- # decimal 1 00:06:01.122 11:14:23 -- scripts/common.sh@353 -- # local d=1 00:06:01.122 11:14:23 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.122 11:14:23 -- scripts/common.sh@355 -- # echo 1 00:06:01.122 11:14:23 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.122 11:14:23 -- scripts/common.sh@366 -- # decimal 2 00:06:01.122 11:14:23 -- scripts/common.sh@353 -- # local d=2 00:06:01.122 11:14:23 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.122 11:14:23 -- scripts/common.sh@355 -- # echo 2 00:06:01.123 11:14:23 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.123 11:14:23 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.123 11:14:23 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.123 11:14:23 -- scripts/common.sh@368 -- # return 0 00:06:01.123 11:14:23 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.123 11:14:23 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:01.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.123 --rc genhtml_branch_coverage=1 00:06:01.123 --rc genhtml_function_coverage=1 00:06:01.123 --rc genhtml_legend=1 00:06:01.123 --rc geninfo_all_blocks=1 00:06:01.123 --rc geninfo_unexecuted_blocks=1 00:06:01.123 00:06:01.123 ' 00:06:01.123 11:14:23 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:01.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.123 --rc genhtml_branch_coverage=1 00:06:01.123 --rc genhtml_function_coverage=1 00:06:01.123 --rc genhtml_legend=1 00:06:01.123 --rc geninfo_all_blocks=1 00:06:01.123 --rc geninfo_unexecuted_blocks=1 00:06:01.123 00:06:01.123 ' 00:06:01.123 11:14:23 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:01.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.123 --rc genhtml_branch_coverage=1 00:06:01.123 --rc genhtml_function_coverage=1 00:06:01.123 --rc genhtml_legend=1 00:06:01.123 --rc geninfo_all_blocks=1 00:06:01.123 --rc geninfo_unexecuted_blocks=1 00:06:01.123 00:06:01.123 ' 00:06:01.123 11:14:23 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:01.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.123 --rc genhtml_branch_coverage=1 00:06:01.123 --rc genhtml_function_coverage=1 00:06:01.123 --rc genhtml_legend=1 00:06:01.123 --rc geninfo_all_blocks=1 00:06:01.123 --rc geninfo_unexecuted_blocks=1 00:06:01.123 00:06:01.123 ' 00:06:01.123 11:14:23 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:01.123 11:14:23 -- nvmf/common.sh@7 -- # uname -s 00:06:01.123 11:14:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.123 11:14:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.123 11:14:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.123 11:14:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.123 11:14:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.123 11:14:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.123 11:14:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.123 11:14:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.123 11:14:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.123 11:14:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.381 11:14:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7b0050-bbf1-48b8-acd4-61d22420e52c 00:06:01.381 
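[Editor's note] The NVME_HOSTNQN captured just above is the output of `nvme gen-hostnqn`: a UUID-based host NQN in the fixed `nqn.2014-08.org.nvmexpress:uuid:` namespace defined by the NVMe base spec. A minimal sketch of an equivalent construction (assumption: `uuidgen` stands in for whatever UUID source nvme-cli actually uses):

    # hypothetical stand-in for `nvme gen-hostnqn`; not nvme-cli source
    printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$(uuidgen)"
    # e.g. nqn.2014-08.org.nvmexpress:uuid:0c7b0050-bbf1-48b8-acd4-61d22420e52c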
11:14:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7b0050-bbf1-48b8-acd4-61d22420e52c 00:06:01.381 11:14:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.381 11:14:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.381 11:14:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:01.381 11:14:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.381 11:14:23 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:01.381 11:14:23 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:01.381 11:14:23 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.381 11:14:23 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.381 11:14:23 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.381 11:14:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.381 11:14:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.381 11:14:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.381 11:14:23 -- paths/export.sh@5 -- # export PATH 00:06:01.381 11:14:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.381 11:14:23 -- nvmf/common.sh@51 -- # : 0 00:06:01.381 11:14:23 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:01.381 11:14:23 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:01.381 11:14:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.381 11:14:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.381 11:14:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.381 11:14:23 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:01.381 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:01.381 11:14:23 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:01.381 11:14:23 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:01.381 11:14:23 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:01.381 11:14:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:01.381 11:14:23 -- spdk/autotest.sh@32 -- # uname -s 00:06:01.381 11:14:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:01.381 11:14:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:01.381 11:14:23 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:01.381 11:14:23 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:01.381 11:14:23 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:01.381 11:14:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:01.381 11:14:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:01.381 11:14:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:01.381 11:14:23 -- spdk/autotest.sh@48 -- # udevadm_pid=55297 00:06:01.381 11:14:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:01.381 11:14:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:01.381 11:14:23 -- pm/common@17 -- # local monitor 00:06:01.381 11:14:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:01.381 11:14:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:01.381 11:14:23 -- pm/common@21 -- # date +%s 00:06:01.381 11:14:23 -- pm/common@25 -- # sleep 1 00:06:01.382 11:14:23 -- pm/common@21 -- # date +%s 00:06:01.382 11:14:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733829263 00:06:01.382 11:14:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733829263 00:06:01.382 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733829263_collect-vmstat.pm.log 00:06:01.382 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733829263_collect-cpu-load.pm.log 00:06:02.316 11:14:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:02.316 11:14:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:02.317 11:14:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:02.317 11:14:24 -- common/autotest_common.sh@10 -- # set +x 00:06:02.317 11:14:24 -- spdk/autotest.sh@59 -- # create_test_list 00:06:02.317 11:14:24 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:02.317 11:14:24 -- common/autotest_common.sh@10 -- # set +x 00:06:02.317 11:14:24 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:02.317 11:14:24 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:02.317 11:14:24 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:02.317 11:14:24 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:02.317 11:14:24 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:02.317 11:14:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:02.317 11:14:24 -- common/autotest_common.sh@1457 -- # uname 00:06:02.317 11:14:24 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:02.317 11:14:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:02.317 11:14:24 -- common/autotest_common.sh@1477 -- # uname 00:06:02.317 11:14:24 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:02.317 11:14:24 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:02.317 11:14:24 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:02.575 lcov: LCOV version 1.15 00:06:02.575 11:14:24 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:20.729 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:20.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:38.821 11:15:00 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:38.821 11:15:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.821 11:15:00 -- common/autotest_common.sh@10 -- # set +x 00:06:38.821 11:15:00 -- spdk/autotest.sh@78 -- # rm -f 00:06:38.821 11:15:00 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:39.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:39.647 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:39.647 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:39.944 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:06:39.944 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:06:39.944 11:15:01 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:39.944 11:15:01 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:39.944 11:15:01 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:39.944 11:15:01 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:39.944 11:15:01 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:39.944 11:15:01 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:39.944 11:15:01 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:39.944 11:15:01 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:39.944 11:15:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:39.944 11:15:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:39.944 11:15:01 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:39.944 11:15:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:39.944 11:15:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:39.944 11:15:01 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:39.944 11:15:01 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:39.944 11:15:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:39.944 11:15:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:39.944 11:15:01 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:39.944 11:15:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:39.944 11:15:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:39.944 11:15:01 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:39.944 11:15:01 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:39.944 11:15:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:39.944 11:15:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:39.944 11:15:01 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:39.944 11:15:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:39.944 11:15:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:39.944 11:15:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:39.944 11:15:01 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:06:39.944 11:15:01 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:39.944 11:15:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:39.944 11:15:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:39.944 11:15:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:39.944 11:15:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:39.944 11:15:01 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:39.944 11:15:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:39.944 11:15:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:39.944 11:15:01 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:39.944 11:15:01 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:39.944 11:15:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:39.944 11:15:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:39.944 11:15:01 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:39.945 11:15:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:39.945 11:15:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:39.945 11:15:01 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:39.945 11:15:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:39.945 11:15:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:39.945 11:15:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:39.945 11:15:01 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:39.945 11:15:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:39.945 No valid GPT data, bailing 00:06:39.945 11:15:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:39.945 11:15:01 -- scripts/common.sh@394 -- # pt= 00:06:39.945 11:15:01 -- scripts/common.sh@395 -- # return 1 00:06:39.945 11:15:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:39.945 1+0 records in 00:06:39.945 1+0 records out 00:06:39.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117979 s, 88.9 MB/s 00:06:39.945 11:15:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:39.945 11:15:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:39.945 11:15:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:39.945 11:15:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:39.945 11:15:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:39.945 No valid GPT data, bailing 00:06:39.945 11:15:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:39.945 11:15:02 -- scripts/common.sh@394 -- # pt= 00:06:39.945 11:15:02 -- scripts/common.sh@395 -- # return 1 00:06:39.945 11:15:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:39.945 1+0 records in 00:06:39.945 1+0 records out 00:06:39.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00406789 s, 258 MB/s 00:06:39.945 11:15:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:39.945 11:15:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:39.945 11:15:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:06:39.945 11:15:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:06:39.945 11:15:02 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:06:40.226 No valid GPT data, bailing 00:06:40.226 11:15:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:40.226 11:15:02 -- scripts/common.sh@394 -- # pt= 00:06:40.226 11:15:02 -- scripts/common.sh@395 -- # return 1 00:06:40.226 11:15:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:06:40.226 1+0 records in 00:06:40.226 1+0 records out 00:06:40.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444298 s, 236 MB/s 00:06:40.226 11:15:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:40.226 11:15:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:40.226 11:15:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:06:40.226 11:15:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:06:40.226 11:15:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:06:40.226 No valid GPT data, bailing 00:06:40.226 11:15:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:40.226 11:15:02 -- scripts/common.sh@394 -- # pt= 00:06:40.226 11:15:02 -- scripts/common.sh@395 -- # return 1 00:06:40.226 11:15:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:06:40.226 1+0 records in 00:06:40.226 1+0 records out 00:06:40.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519955 s, 202 MB/s 00:06:40.226 11:15:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:40.226 11:15:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:40.226 11:15:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:06:40.226 11:15:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:06:40.226 11:15:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:06:40.226 No valid GPT data, bailing 00:06:40.226 11:15:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:40.226 11:15:02 -- scripts/common.sh@394 -- # pt= 00:06:40.226 11:15:02 -- scripts/common.sh@395 -- # return 1 00:06:40.226 11:15:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:06:40.226 1+0 records in 00:06:40.226 1+0 records out 00:06:40.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430306 s, 244 MB/s 00:06:40.226 11:15:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:40.226 11:15:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:40.226 11:15:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:06:40.226 11:15:02 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:06:40.226 11:15:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:06:40.226 No valid GPT data, bailing 00:06:40.226 11:15:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:40.226 11:15:02 -- scripts/common.sh@394 -- # pt= 00:06:40.226 11:15:02 -- scripts/common.sh@395 -- # return 1 00:06:40.226 11:15:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:06:40.226 1+0 records in 00:06:40.226 1+0 records out 00:06:40.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00424029 s, 247 MB/s 00:06:40.226 11:15:02 -- spdk/autotest.sh@105 -- # sync 00:06:40.485 11:15:02 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:40.485 11:15:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:40.485 11:15:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:42.386 
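[Editor's note] The pre-cleanup loop traced through this stretch checks every NVMe namespace (skipping partitions) for an existing partition table before scrubbing it; "No valid GPT data, bailing" is the signal that the one-MiB `dd` wipe is safe. A condensed sketch of that flow, reconstructed from the trace (paths as in this run; the exact return-code handling inside block_in_use is an assumption):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do          # whole namespaces only, no partitions
        # block_in_use: consult spdk-gpt.py, then blkid, for a partition table
        if /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev" \
           || [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
            continue                          # table found: the device holds data
        fi
        dd if=/dev/zero of="$dev" bs=1M count=1   # no GPT: scrub the label area
    done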
11:15:04 -- spdk/autotest.sh@111 -- # uname -s 00:06:42.386 11:15:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:42.386 11:15:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:42.386 11:15:04 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:42.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.217 Hugepages 00:06:43.217 node hugesize free / total 00:06:43.475 node0 1048576kB 0 / 0 00:06:43.476 node0 2048kB 0 / 0 00:06:43.476 00:06:43.476 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:43.476 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:43.476 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:43.476 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:43.734 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:43.734 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:43.734 11:15:05 -- spdk/autotest.sh@117 -- # uname -s 00:06:43.734 11:15:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:43.734 11:15:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:43.734 11:15:05 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:44.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:44.868 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.868 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.868 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.868 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.868 11:15:07 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:46.242 11:15:08 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:46.242 11:15:08 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:46.242 11:15:08 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:46.242 11:15:08 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:46.242 11:15:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:46.242 11:15:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:46.242 11:15:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:46.242 11:15:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:46.242 11:15:08 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:46.242 11:15:08 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:46.242 11:15:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:46.242 11:15:08 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:46.501 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:46.501 Waiting for block devices as requested 00:06:46.501 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.758 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.758 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.758 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:52.066 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:52.066 11:15:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:52.066 11:15:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
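[Editor's note] The `get_nvme_ctrlr_from_bdf` helper invoked at the end of the line above (and expanded over the next lines) maps a PCI address to its controller node by walking sysfs; note that kernel enumeration order need not match PCI order, which is why 0000:00:10.0 resolves to nvme1 in this run. A condensed sketch of the traced logic:

    get_nvme_ctrlr_from_bdf() {
        local bdf=$1 sysfs
        # each /sys/class/nvme/nvmeN symlink resolves to a path embedding the
        # controller's PCI address, e.g. /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
        sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
        [[ -z $sysfs ]] && return 1
        basename "$sysfs"      # prints nvme1 for 0000:00:10.0 here
    }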
00:06:52.066 11:15:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:52.066 11:15:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:52.066 11:15:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:52.066 11:15:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:52.066 11:15:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:52.066 11:15:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:52.066 11:15:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:52.066 11:15:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:52.066 11:15:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:52.066 11:15:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1543 -- # continue 00:06:52.066 11:15:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:52.066 11:15:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:52.066 11:15:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:52.066 11:15:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:52.066 11:15:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:52.066 11:15:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:52.066 11:15:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:52.066 11:15:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:52.066 11:15:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:52.066 11:15:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:52.066 11:15:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:06:52.066 11:15:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1543 -- # continue 00:06:52.066 11:15:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:52.066 11:15:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:52.066 11:15:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:52.066 11:15:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:06:52.066 11:15:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:52.066 11:15:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:52.066 11:15:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:52.066 11:15:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1543 -- # continue 00:06:52.066 11:15:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:52.066 11:15:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:52.066 11:15:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:52.066 11:15:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:06:52.066 11:15:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:52.066 11:15:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:52.066 11:15:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:06:52.066 11:15:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:06:52.066 11:15:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:52.066 11:15:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:52.066 11:15:14 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:52.066 11:15:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:52.066 11:15:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:52.067 11:15:14 -- common/autotest_common.sh@1543 -- # continue 00:06:52.067 11:15:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:52.067 11:15:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:52.067 11:15:14 -- common/autotest_common.sh@10 -- # set +x 00:06:52.067 11:15:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:52.067 11:15:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.067 11:15:14 -- common/autotest_common.sh@10 -- # set +x 00:06:52.067 11:15:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:52.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:53.221 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.221 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.221 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.221 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.221 11:15:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:53.221 11:15:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:53.221 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:06:53.479 11:15:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:53.479 11:15:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:53.479 11:15:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:53.479 11:15:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:53.479 11:15:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:53.479 11:15:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:53.479 11:15:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:53.479 11:15:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:53.479 11:15:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:53.479 11:15:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:53.479 11:15:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:53.479 11:15:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:53.479 11:15:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:53.479 11:15:15 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:53.479 11:15:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:53.479 11:15:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:53.479 11:15:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:53.479 11:15:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:53.479 11:15:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:53.479 11:15:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:53.479 11:15:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:53.479 11:15:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:53.479 
11:15:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:53.479 11:15:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:53.479 11:15:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:53.479 11:15:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:53.479 11:15:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:53.479 11:15:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:53.479 11:15:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:53.479 11:15:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:53.479 11:15:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:53.479 11:15:15 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:53.479 11:15:15 -- common/autotest_common.sh@1572 -- # return 0 00:06:53.479 11:15:15 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:53.479 11:15:15 -- common/autotest_common.sh@1580 -- # return 0 00:06:53.479 11:15:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:53.479 11:15:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:53.479 11:15:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:53.479 11:15:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:53.479 11:15:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:53.479 11:15:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.479 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:06:53.479 11:15:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:53.479 11:15:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:53.479 11:15:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.479 11:15:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.479 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:06:53.479 ************************************ 00:06:53.479 START TEST env 00:06:53.479 ************************************ 00:06:53.479 11:15:15 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:53.479 * Looking for test storage... 
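[Editor's note] The `[[ 0x0010 == \0\x\0\a\5\4 ]]` comparisons above are opal_revert_cleanup filtering for PCI device ID 0x0a54 (drives that need an OPAL revert before testing); every emulated controller in this run reports 0x0010, so the list stays empty and the revert is skipped. A condensed sketch of the traced filter:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=()
    for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")   # 0x0010 (QEMU NVMe) never matches
    done
    (( ${#bdfs[@]} )) && printf '%s\n' "${bdfs[@]}"  # empty here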
00:06:53.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:53.479 11:15:15 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.479 11:15:15 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.479 11:15:15 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.738 11:15:15 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.738 11:15:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.738 11:15:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.738 11:15:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.738 11:15:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.738 11:15:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.738 11:15:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.738 11:15:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.738 11:15:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.738 11:15:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.738 11:15:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.738 11:15:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.738 11:15:15 env -- scripts/common.sh@344 -- # case "$op" in 00:06:53.738 11:15:15 env -- scripts/common.sh@345 -- # : 1 00:06:53.738 11:15:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.738 11:15:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.738 11:15:15 env -- scripts/common.sh@365 -- # decimal 1 00:06:53.738 11:15:15 env -- scripts/common.sh@353 -- # local d=1 00:06:53.738 11:15:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.738 11:15:15 env -- scripts/common.sh@355 -- # echo 1 00:06:53.738 11:15:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.738 11:15:15 env -- scripts/common.sh@366 -- # decimal 2 00:06:53.738 11:15:15 env -- scripts/common.sh@353 -- # local d=2 00:06:53.738 11:15:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.738 11:15:15 env -- scripts/common.sh@355 -- # echo 2 00:06:53.738 11:15:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.738 11:15:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.738 11:15:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.738 11:15:15 env -- scripts/common.sh@368 -- # return 0 00:06:53.738 11:15:15 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.738 11:15:15 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.738 --rc genhtml_branch_coverage=1 00:06:53.738 --rc genhtml_function_coverage=1 00:06:53.738 --rc genhtml_legend=1 00:06:53.738 --rc geninfo_all_blocks=1 00:06:53.738 --rc geninfo_unexecuted_blocks=1 00:06:53.738 00:06:53.738 ' 00:06:53.738 11:15:15 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.738 --rc genhtml_branch_coverage=1 00:06:53.738 --rc genhtml_function_coverage=1 00:06:53.738 --rc genhtml_legend=1 00:06:53.739 --rc geninfo_all_blocks=1 00:06:53.739 --rc geninfo_unexecuted_blocks=1 00:06:53.739 00:06:53.739 ' 00:06:53.739 11:15:15 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.739 --rc genhtml_branch_coverage=1 00:06:53.739 --rc genhtml_function_coverage=1 00:06:53.739 --rc 
genhtml_legend=1 00:06:53.739 --rc geninfo_all_blocks=1 00:06:53.739 --rc geninfo_unexecuted_blocks=1 00:06:53.739 00:06:53.739 ' 00:06:53.739 11:15:15 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.739 --rc genhtml_branch_coverage=1 00:06:53.739 --rc genhtml_function_coverage=1 00:06:53.739 --rc genhtml_legend=1 00:06:53.739 --rc geninfo_all_blocks=1 00:06:53.739 --rc geninfo_unexecuted_blocks=1 00:06:53.739 00:06:53.739 ' 00:06:53.739 11:15:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:53.739 11:15:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.739 11:15:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.739 11:15:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:53.739 ************************************ 00:06:53.739 START TEST env_memory 00:06:53.739 ************************************ 00:06:53.739 11:15:15 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:53.739 00:06:53.739 00:06:53.739 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.739 http://cunit.sourceforge.net/ 00:06:53.739 00:06:53.739 00:06:53.739 Suite: memory 00:06:53.739 Test: alloc and free memory map ...[2024-12-10 11:15:15.788313] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:53.739 passed 00:06:53.739 Test: mem map translation ...[2024-12-10 11:15:15.853342] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:53.739 [2024-12-10 11:15:15.853455] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:53.739 [2024-12-10 11:15:15.853591] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:53.739 [2024-12-10 11:15:15.853695] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:53.997 passed 00:06:53.997 Test: mem map registration ...[2024-12-10 11:15:15.955274] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:53.997 [2024-12-10 11:15:15.955388] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:53.997 passed 00:06:53.997 Test: mem map adjacent registrations ...passed 00:06:53.997 00:06:53.997 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.997 suites 1 1 n/a 0 0 00:06:53.997 tests 4 4 4 0 0 00:06:53.997 asserts 152 152 152 0 n/a 00:06:53.997 00:06:53.997 Elapsed time = 0.349 seconds 00:06:53.997 00:06:53.997 real 0m0.388s 00:06:53.997 user 0m0.358s 00:06:53.997 sys 0m0.022s 00:06:53.997 11:15:16 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.997 ************************************ 00:06:53.997 END TEST env_memory 00:06:53.997 ************************************ 00:06:53.997 11:15:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:53.997 11:15:16 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:53.997 11:15:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.997 11:15:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.998 11:15:16 env -- common/autotest_common.sh@10 -- # set +x 00:06:53.998 ************************************ 00:06:53.998 START TEST env_vtophys 00:06:53.998 ************************************ 00:06:53.998 11:15:16 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:54.256 EAL: lib.eal log level changed from notice to debug 00:06:54.256 EAL: Detected lcore 0 as core 0 on socket 0 00:06:54.256 EAL: Detected lcore 1 as core 0 on socket 0 00:06:54.256 EAL: Detected lcore 2 as core 0 on socket 0 00:06:54.256 EAL: Detected lcore 3 as core 0 on socket 0 00:06:54.256 EAL: Detected lcore 4 as core 0 on socket 0 00:06:54.256 EAL: Detected lcore 5 as core 0 on socket 0 00:06:54.256 EAL: Detected lcore 6 as core 0 on socket 0 00:06:54.256 EAL: Detected lcore 7 as core 0 on socket 0 00:06:54.256 EAL: Detected lcore 8 as core 0 on socket 0 00:06:54.256 EAL: Detected lcore 9 as core 0 on socket 0 00:06:54.256 EAL: Maximum logical cores by configuration: 128 00:06:54.256 EAL: Detected CPU lcores: 10 00:06:54.256 EAL: Detected NUMA nodes: 1 00:06:54.256 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:54.256 EAL: Detected shared linkage of DPDK 00:06:54.256 EAL: No shared files mode enabled, IPC will be disabled 00:06:54.256 EAL: Selected IOVA mode 'PA' 00:06:54.256 EAL: Probing VFIO support... 00:06:54.256 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:54.256 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:54.256 EAL: Ask a virtual area of 0x2e000 bytes 00:06:54.256 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:54.256 EAL: Setting up physically contiguous memory... 
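Note: the EAL lines above record that /sys/module/vfio was absent on this VM, so the probe fell back to non-VFIO operation and IOVA mode 'PA' was selected. A hedged sketch (standard Linux commands, not part of the CI scripts in this log) of how one would check for and load the missing modules on a host where VFIO is wanted:

    # Not from the log: a generic check for the modules EAL looked for above.
    lsmod | grep -E '^vfio(_pci)?' ||
        sudo modprobe vfio-pci   # loads vfio as a dependency; with an IOMMU this
                                 # typically lets EAL pick IOVA mode 'VA' over 'PA'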
00:06:54.256 EAL: Setting maximum number of open files to 524288 00:06:54.256 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:54.256 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:54.256 EAL: Ask a virtual area of 0x61000 bytes 00:06:54.256 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:54.256 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:54.256 EAL: Ask a virtual area of 0x400000000 bytes 00:06:54.256 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:54.256 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:54.256 EAL: Ask a virtual area of 0x61000 bytes 00:06:54.256 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:54.256 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:54.256 EAL: Ask a virtual area of 0x400000000 bytes 00:06:54.256 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:54.256 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:54.256 EAL: Ask a virtual area of 0x61000 bytes 00:06:54.256 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:54.256 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:54.256 EAL: Ask a virtual area of 0x400000000 bytes 00:06:54.256 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:54.256 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:54.256 EAL: Ask a virtual area of 0x61000 bytes 00:06:54.256 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:54.256 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:54.256 EAL: Ask a virtual area of 0x400000000 bytes 00:06:54.256 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:54.256 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:54.256 EAL: Hugepages will be freed exactly as allocated. 00:06:54.256 EAL: No shared files mode enabled, IPC is disabled 00:06:54.256 EAL: No shared files mode enabled, IPC is disabled 00:06:54.256 EAL: TSC frequency is ~2200000 KHz 00:06:54.256 EAL: Main lcore 0 is ready (tid=7f6fa4275a40;cpuset=[0]) 00:06:54.256 EAL: Trying to obtain current memory policy. 00:06:54.256 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:54.256 EAL: Restoring previous memory policy: 0 00:06:54.256 EAL: request: mp_malloc_sync 00:06:54.256 EAL: No shared files mode enabled, IPC is disabled 00:06:54.256 EAL: Heap on socket 0 was expanded by 2MB 00:06:54.256 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:54.256 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:54.256 EAL: Mem event callback 'spdk:(nil)' registered 00:06:54.256 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:54.256 00:06:54.256 00:06:54.256 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.256 http://cunit.sourceforge.net/ 00:06:54.256 00:06:54.256 00:06:54.256 Suite: components_suite 00:06:54.825 Test: vtophys_malloc_test ...passed 00:06:54.825 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:54.825 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:54.825 EAL: Restoring previous memory policy: 4 00:06:54.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.825 EAL: request: mp_malloc_sync 00:06:54.825 EAL: No shared files mode enabled, IPC is disabled 00:06:54.825 EAL: Heap on socket 0 was expanded by 4MB 00:06:54.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.825 EAL: request: mp_malloc_sync 00:06:54.825 EAL: No shared files mode enabled, IPC is disabled 00:06:54.825 EAL: Heap on socket 0 was shrunk by 4MB 00:06:54.825 EAL: Trying to obtain current memory policy. 00:06:54.825 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:54.825 EAL: Restoring previous memory policy: 4 00:06:54.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.825 EAL: request: mp_malloc_sync 00:06:54.825 EAL: No shared files mode enabled, IPC is disabled 00:06:54.825 EAL: Heap on socket 0 was expanded by 6MB 00:06:54.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.825 EAL: request: mp_malloc_sync 00:06:54.825 EAL: No shared files mode enabled, IPC is disabled 00:06:54.825 EAL: Heap on socket 0 was shrunk by 6MB 00:06:54.825 EAL: Trying to obtain current memory policy. 00:06:54.825 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:54.825 EAL: Restoring previous memory policy: 4 00:06:54.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.825 EAL: request: mp_malloc_sync 00:06:54.825 EAL: No shared files mode enabled, IPC is disabled 00:06:54.825 EAL: Heap on socket 0 was expanded by 10MB 00:06:54.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.825 EAL: request: mp_malloc_sync 00:06:54.825 EAL: No shared files mode enabled, IPC is disabled 00:06:54.825 EAL: Heap on socket 0 was shrunk by 10MB 00:06:54.825 EAL: Trying to obtain current memory policy. 00:06:54.825 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:54.825 EAL: Restoring previous memory policy: 4 00:06:54.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.825 EAL: request: mp_malloc_sync 00:06:54.825 EAL: No shared files mode enabled, IPC is disabled 00:06:54.825 EAL: Heap on socket 0 was expanded by 18MB 00:06:54.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.825 EAL: request: mp_malloc_sync 00:06:54.825 EAL: No shared files mode enabled, IPC is disabled 00:06:54.825 EAL: Heap on socket 0 was shrunk by 18MB 00:06:54.825 EAL: Trying to obtain current memory policy. 00:06:54.825 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:54.825 EAL: Restoring previous memory policy: 4 00:06:54.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.825 EAL: request: mp_malloc_sync 00:06:54.825 EAL: No shared files mode enabled, IPC is disabled 00:06:54.825 EAL: Heap on socket 0 was expanded by 34MB 00:06:54.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.825 EAL: request: mp_malloc_sync 00:06:54.825 EAL: No shared files mode enabled, IPC is disabled 00:06:54.825 EAL: Heap on socket 0 was shrunk by 34MB 00:06:55.084 EAL: Trying to obtain current memory policy. 
00:06:55.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.084 EAL: Restoring previous memory policy: 4 00:06:55.084 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.084 EAL: request: mp_malloc_sync 00:06:55.084 EAL: No shared files mode enabled, IPC is disabled 00:06:55.084 EAL: Heap on socket 0 was expanded by 66MB 00:06:55.084 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.084 EAL: request: mp_malloc_sync 00:06:55.084 EAL: No shared files mode enabled, IPC is disabled 00:06:55.084 EAL: Heap on socket 0 was shrunk by 66MB 00:06:55.084 EAL: Trying to obtain current memory policy. 00:06:55.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.084 EAL: Restoring previous memory policy: 4 00:06:55.084 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.084 EAL: request: mp_malloc_sync 00:06:55.084 EAL: No shared files mode enabled, IPC is disabled 00:06:55.084 EAL: Heap on socket 0 was expanded by 130MB 00:06:55.342 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.342 EAL: request: mp_malloc_sync 00:06:55.342 EAL: No shared files mode enabled, IPC is disabled 00:06:55.342 EAL: Heap on socket 0 was shrunk by 130MB 00:06:55.600 EAL: Trying to obtain current memory policy. 00:06:55.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.600 EAL: Restoring previous memory policy: 4 00:06:55.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.600 EAL: request: mp_malloc_sync 00:06:55.600 EAL: No shared files mode enabled, IPC is disabled 00:06:55.600 EAL: Heap on socket 0 was expanded by 258MB 00:06:56.167 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.167 EAL: request: mp_malloc_sync 00:06:56.167 EAL: No shared files mode enabled, IPC is disabled 00:06:56.167 EAL: Heap on socket 0 was shrunk by 258MB 00:06:56.425 EAL: Trying to obtain current memory policy. 00:06:56.425 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:56.425 EAL: Restoring previous memory policy: 4 00:06:56.425 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.425 EAL: request: mp_malloc_sync 00:06:56.425 EAL: No shared files mode enabled, IPC is disabled 00:06:56.425 EAL: Heap on socket 0 was expanded by 514MB 00:06:57.361 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.361 EAL: request: mp_malloc_sync 00:06:57.361 EAL: No shared files mode enabled, IPC is disabled 00:06:57.361 EAL: Heap on socket 0 was shrunk by 514MB 00:06:58.296 EAL: Trying to obtain current memory policy. 
00:06:58.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.296 EAL: Restoring previous memory policy: 4 00:06:58.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.296 EAL: request: mp_malloc_sync 00:06:58.296 EAL: No shared files mode enabled, IPC is disabled 00:06:58.296 EAL: Heap on socket 0 was expanded by 1026MB 00:07:00.197 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.197 EAL: request: mp_malloc_sync 00:07:00.197 EAL: No shared files mode enabled, IPC is disabled 00:07:00.197 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:01.571 passed 00:07:01.571 00:07:01.571 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.571 suites 1 1 n/a 0 0 00:07:01.571 tests 2 2 2 0 0 00:07:01.571 asserts 5677 5677 5677 0 n/a 00:07:01.571 00:07:01.571 Elapsed time = 7.176 seconds 00:07:01.571 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.571 EAL: request: mp_malloc_sync 00:07:01.571 EAL: No shared files mode enabled, IPC is disabled 00:07:01.571 EAL: Heap on socket 0 was shrunk by 2MB 00:07:01.571 EAL: No shared files mode enabled, IPC is disabled 00:07:01.571 EAL: No shared files mode enabled, IPC is disabled 00:07:01.571 EAL: No shared files mode enabled, IPC is disabled 00:07:01.571 00:07:01.571 real 0m7.526s 00:07:01.571 user 0m6.619s 00:07:01.571 sys 0m0.733s 00:07:01.571 11:15:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.571 11:15:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:01.571 ************************************ 00:07:01.571 END TEST env_vtophys 00:07:01.571 ************************************ 00:07:01.571 11:15:23 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:01.571 11:15:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.571 11:15:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.571 11:15:23 env -- common/autotest_common.sh@10 -- # set +x 00:07:01.571 ************************************ 00:07:01.571 START TEST env_pci 00:07:01.571 ************************************ 00:07:01.571 11:15:23 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:01.829 00:07:01.829 00:07:01.829 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.829 http://cunit.sourceforge.net/ 00:07:01.829 00:07:01.829 00:07:01.829 Suite: pci 00:07:01.829 Test: pci_hook ...[2024-12-10 11:15:23.768564] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58172 has claimed it 00:07:01.829 passed 00:07:01.829 00:07:01.829 EAL: Cannot find device (10000:00:01.0) 00:07:01.829 EAL: Failed to attach device on primary process 00:07:01.829 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.829 suites 1 1 n/a 0 0 00:07:01.829 tests 1 1 1 0 0 00:07:01.829 asserts 25 25 25 0 n/a 00:07:01.829 00:07:01.829 Elapsed time = 0.009 seconds 00:07:01.829 00:07:01.829 real 0m0.092s 00:07:01.829 user 0m0.041s 00:07:01.829 sys 0m0.050s 00:07:01.829 11:15:23 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.829 11:15:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:01.829 ************************************ 00:07:01.829 END TEST env_pci 00:07:01.829 ************************************ 00:07:01.829 11:15:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:01.829 11:15:23 env -- env/env.sh@15 -- # uname 00:07:01.829 11:15:23 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:01.829 11:15:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:01.829 11:15:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:01.829 11:15:23 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:01.829 11:15:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.829 11:15:23 env -- common/autotest_common.sh@10 -- # set +x 00:07:01.829 ************************************ 00:07:01.829 START TEST env_dpdk_post_init 00:07:01.829 ************************************ 00:07:01.829 11:15:23 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:01.829 EAL: Detected CPU lcores: 10 00:07:01.829 EAL: Detected NUMA nodes: 1 00:07:01.829 EAL: Detected shared linkage of DPDK 00:07:01.829 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:01.829 EAL: Selected IOVA mode 'PA' 00:07:02.088 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:02.088 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:02.088 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:02.088 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:07:02.088 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:07:02.088 Starting DPDK initialization... 00:07:02.088 Starting SPDK post initialization... 00:07:02.088 SPDK NVMe probe 00:07:02.088 Attaching to 0000:00:10.0 00:07:02.088 Attaching to 0000:00:11.0 00:07:02.088 Attaching to 0000:00:12.0 00:07:02.088 Attaching to 0000:00:13.0 00:07:02.088 Attached to 0000:00:10.0 00:07:02.088 Attached to 0000:00:11.0 00:07:02.088 Attached to 0000:00:13.0 00:07:02.088 Attached to 0000:00:12.0 00:07:02.088 Cleaning up... 
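Note: the env.sh trace above (lines @14-@24) builds the post-init test's arguments conditionally: core mask 0x1 always, --base-virtaddr only when uname reports Linux. With those flags, DPDK initialization pinned SPDK's reserved virtual range and the NVMe probe attached the four emulated controllers listed above. Reconstructed from the traced lines, the argument logic is roughly:

    # Reconstructed from the env.sh xtrace above; paths as printed in the log
    argv='-c 0x1 '
    if [ "$(uname)" = Linux ]; then
        argv+=--base-virtaddr=0x200000000000   # pin SPDK's reserved VA range
    fi
    run_test env_dpdk_post_init \
        /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv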
00:07:02.088 ************************************ 00:07:02.088 END TEST env_dpdk_post_init 00:07:02.088 ************************************ 00:07:02.088 00:07:02.088 real 0m0.317s 00:07:02.088 user 0m0.107s 00:07:02.088 sys 0m0.110s 00:07:02.088 11:15:24 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.088 11:15:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:02.088 11:15:24 env -- env/env.sh@26 -- # uname 00:07:02.088 11:15:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:02.088 11:15:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:02.088 11:15:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.088 11:15:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.088 11:15:24 env -- common/autotest_common.sh@10 -- # set +x 00:07:02.088 ************************************ 00:07:02.088 START TEST env_mem_callbacks 00:07:02.088 ************************************ 00:07:02.088 11:15:24 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:02.346 EAL: Detected CPU lcores: 10 00:07:02.346 EAL: Detected NUMA nodes: 1 00:07:02.346 EAL: Detected shared linkage of DPDK 00:07:02.346 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:02.346 EAL: Selected IOVA mode 'PA' 00:07:02.346 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:02.346 00:07:02.346 00:07:02.346 CUnit - A unit testing framework for C - Version 2.1-3 00:07:02.346 http://cunit.sourceforge.net/ 00:07:02.346 00:07:02.346 00:07:02.346 Suite: memory 00:07:02.346 Test: test ... 00:07:02.346 register 0x200000200000 2097152 00:07:02.346 malloc 3145728 00:07:02.346 register 0x200000400000 4194304 00:07:02.346 buf 0x2000004fffc0 len 3145728 PASSED 00:07:02.346 malloc 64 00:07:02.346 buf 0x2000004ffec0 len 64 PASSED 00:07:02.346 malloc 4194304 00:07:02.346 register 0x200000800000 6291456 00:07:02.346 buf 0x2000009fffc0 len 4194304 PASSED 00:07:02.346 free 0x2000004fffc0 3145728 00:07:02.346 free 0x2000004ffec0 64 00:07:02.346 unregister 0x200000400000 4194304 PASSED 00:07:02.346 free 0x2000009fffc0 4194304 00:07:02.346 unregister 0x200000800000 6291456 PASSED 00:07:02.346 malloc 8388608 00:07:02.346 register 0x200000400000 10485760 00:07:02.346 buf 0x2000005fffc0 len 8388608 PASSED 00:07:02.346 free 0x2000005fffc0 8388608 00:07:02.346 unregister 0x200000400000 10485760 PASSED 00:07:02.346 passed 00:07:02.346 00:07:02.346 Run Summary: Type Total Ran Passed Failed Inactive 00:07:02.346 suites 1 1 n/a 0 0 00:07:02.346 tests 1 1 1 0 0 00:07:02.346 asserts 15 15 15 0 n/a 00:07:02.346 00:07:02.346 Elapsed time = 0.077 seconds 00:07:02.604 00:07:02.604 real 0m0.287s 00:07:02.604 user 0m0.114s 00:07:02.604 sys 0m0.066s 00:07:02.604 11:15:24 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.604 ************************************ 00:07:02.604 END TEST env_mem_callbacks 00:07:02.604 ************************************ 00:07:02.604 11:15:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:02.604 ************************************ 00:07:02.604 END TEST env 00:07:02.604 ************************************ 00:07:02.604 00:07:02.604 real 0m9.042s 00:07:02.604 user 0m7.425s 00:07:02.604 sys 0m1.215s 00:07:02.604 11:15:24 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.604 11:15:24 env -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.604 11:15:24 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:02.604 11:15:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.604 11:15:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.604 11:15:24 -- common/autotest_common.sh@10 -- # set +x 00:07:02.604 ************************************ 00:07:02.604 START TEST rpc 00:07:02.604 ************************************ 00:07:02.604 11:15:24 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:02.604 * Looking for test storage... 00:07:02.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:02.604 11:15:24 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:02.604 11:15:24 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:02.604 11:15:24 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:02.863 11:15:24 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:02.863 11:15:24 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.863 11:15:24 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.863 11:15:24 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.863 11:15:24 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.863 11:15:24 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.863 11:15:24 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.863 11:15:24 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.863 11:15:24 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.863 11:15:24 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.863 11:15:24 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.863 11:15:24 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.863 11:15:24 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:02.863 11:15:24 rpc -- scripts/common.sh@345 -- # : 1 00:07:02.863 11:15:24 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.863 11:15:24 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.863 11:15:24 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:02.863 11:15:24 rpc -- scripts/common.sh@353 -- # local d=1 00:07:02.863 11:15:24 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.863 11:15:24 rpc -- scripts/common.sh@355 -- # echo 1 00:07:02.863 11:15:24 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.863 11:15:24 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:02.863 11:15:24 rpc -- scripts/common.sh@353 -- # local d=2 00:07:02.863 11:15:24 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.863 11:15:24 rpc -- scripts/common.sh@355 -- # echo 2 00:07:02.863 11:15:24 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.863 11:15:24 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.863 11:15:24 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.863 11:15:24 rpc -- scripts/common.sh@368 -- # return 0 00:07:02.863 11:15:24 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.863 11:15:24 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:02.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.863 --rc genhtml_branch_coverage=1 00:07:02.863 --rc genhtml_function_coverage=1 00:07:02.863 --rc genhtml_legend=1 00:07:02.863 --rc geninfo_all_blocks=1 00:07:02.863 --rc geninfo_unexecuted_blocks=1 00:07:02.863 00:07:02.863 ' 00:07:02.863 11:15:24 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:02.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.863 --rc genhtml_branch_coverage=1 00:07:02.863 --rc genhtml_function_coverage=1 00:07:02.863 --rc genhtml_legend=1 00:07:02.863 --rc geninfo_all_blocks=1 00:07:02.863 --rc geninfo_unexecuted_blocks=1 00:07:02.863 00:07:02.863 ' 00:07:02.863 11:15:24 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:02.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.863 --rc genhtml_branch_coverage=1 00:07:02.863 --rc genhtml_function_coverage=1 00:07:02.863 --rc genhtml_legend=1 00:07:02.863 --rc geninfo_all_blocks=1 00:07:02.863 --rc geninfo_unexecuted_blocks=1 00:07:02.863 00:07:02.863 ' 00:07:02.863 11:15:24 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:02.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.863 --rc genhtml_branch_coverage=1 00:07:02.863 --rc genhtml_function_coverage=1 00:07:02.863 --rc genhtml_legend=1 00:07:02.863 --rc geninfo_all_blocks=1 00:07:02.863 --rc geninfo_unexecuted_blocks=1 00:07:02.863 00:07:02.863 ' 00:07:02.863 11:15:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58299 00:07:02.863 11:15:24 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:02.863 11:15:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:02.863 11:15:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58299 00:07:02.863 11:15:24 rpc -- common/autotest_common.sh@835 -- # '[' -z 58299 ']' 00:07:02.863 11:15:24 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.863 11:15:24 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.863 11:15:24 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
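Note: the rpc.sh trace above shows the target being launched with only the bdev tracepoint group enabled (-e bdev), a cleanup trap installed, and waitforlisten blocking until the RPC socket at /var/tmp/spdk.sock accepts connections. Condensed from those traced lines (spdk_pid=$! is the pre-expansion form of the traced spdk_pid=58299):

    # Condensed from the rpc.sh xtrace above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!                                       # traced above as spdk_pid=58299
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_pid"                         # polls /var/tmp/spdk.sock until ready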
00:07:02.863 11:15:24 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.863 11:15:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.863 [2024-12-10 11:15:24.931439] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:07:02.863 [2024-12-10 11:15:24.932168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58299 ] 00:07:03.122 [2024-12-10 11:15:25.121525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.122 [2024-12-10 11:15:25.253292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:03.122 [2024-12-10 11:15:25.253556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58299' to capture a snapshot of events at runtime. 00:07:03.122 [2024-12-10 11:15:25.253732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.122 [2024-12-10 11:15:25.253890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.122 [2024-12-10 11:15:25.253944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58299 for offline analysis/debug. 00:07:03.122 [2024-12-10 11:15:25.255458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.056 11:15:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.056 11:15:26 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:04.056 11:15:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:04.056 11:15:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:04.056 11:15:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:04.056 11:15:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:04.056 11:15:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.056 11:15:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.056 11:15:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.056 ************************************ 00:07:04.056 START TEST rpc_integrity 00:07:04.056 ************************************ 00:07:04.056 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:04.056 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:04.056 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.056 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.056 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.056 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:04.056 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:04.056 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:04.056 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:04.056 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.056 11:15:26 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.315 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.315 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:04.315 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:04.315 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.315 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.315 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.315 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:04.315 { 00:07:04.315 "name": "Malloc0", 00:07:04.315 "aliases": [ 00:07:04.315 "44b76afe-201f-41ac-9900-5ee5749d6c26" 00:07:04.315 ], 00:07:04.315 "product_name": "Malloc disk", 00:07:04.315 "block_size": 512, 00:07:04.315 "num_blocks": 16384, 00:07:04.315 "uuid": "44b76afe-201f-41ac-9900-5ee5749d6c26", 00:07:04.315 "assigned_rate_limits": { 00:07:04.315 "rw_ios_per_sec": 0, 00:07:04.315 "rw_mbytes_per_sec": 0, 00:07:04.315 "r_mbytes_per_sec": 0, 00:07:04.315 "w_mbytes_per_sec": 0 00:07:04.315 }, 00:07:04.315 "claimed": false, 00:07:04.315 "zoned": false, 00:07:04.315 "supported_io_types": { 00:07:04.315 "read": true, 00:07:04.315 "write": true, 00:07:04.315 "unmap": true, 00:07:04.315 "flush": true, 00:07:04.315 "reset": true, 00:07:04.315 "nvme_admin": false, 00:07:04.315 "nvme_io": false, 00:07:04.315 "nvme_io_md": false, 00:07:04.315 "write_zeroes": true, 00:07:04.315 "zcopy": true, 00:07:04.315 "get_zone_info": false, 00:07:04.315 "zone_management": false, 00:07:04.316 "zone_append": false, 00:07:04.316 "compare": false, 00:07:04.316 "compare_and_write": false, 00:07:04.316 "abort": true, 00:07:04.316 "seek_hole": false, 00:07:04.316 "seek_data": false, 00:07:04.316 "copy": true, 00:07:04.316 "nvme_iov_md": false 00:07:04.316 }, 00:07:04.316 "memory_domains": [ 00:07:04.316 { 00:07:04.316 "dma_device_id": "system", 00:07:04.316 "dma_device_type": 1 00:07:04.316 }, 00:07:04.316 { 00:07:04.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.316 "dma_device_type": 2 00:07:04.316 } 00:07:04.316 ], 00:07:04.316 "driver_specific": {} 00:07:04.316 } 00:07:04.316 ]' 00:07:04.316 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:04.316 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:04.316 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.316 [2024-12-10 11:15:26.305667] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:04.316 [2024-12-10 11:15:26.305757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.316 [2024-12-10 11:15:26.305798] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:04.316 [2024-12-10 11:15:26.305818] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.316 [2024-12-10 11:15:26.308860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.316 [2024-12-10 11:15:26.308920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:04.316 Passthru0 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.316 
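Note: rpc_integrity has now created a malloc bdev and claimed it under a passthru bdev, all over JSON-RPC; rpc_cmd is a thin wrapper around scripts/rpc.py talking to the target's socket. The full create/verify/delete cycle the test drives (the listing and deletion follow just below) looks roughly like this when issued by hand:

    # Roughly the cycle rpc.sh exercises; rpc.py path as used in this repo
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                      # 8 MiB, 512 B blocks -> Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # claims Malloc0 (exclusive_write)
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]     # both bdevs reported
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0

The 8 MiB / 512 B parameters match the JSON dumped above: num_blocks 16384 at block_size 512.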
11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.316 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:04.316 { 00:07:04.316 "name": "Malloc0", 00:07:04.316 "aliases": [ 00:07:04.316 "44b76afe-201f-41ac-9900-5ee5749d6c26" 00:07:04.316 ], 00:07:04.316 "product_name": "Malloc disk", 00:07:04.316 "block_size": 512, 00:07:04.316 "num_blocks": 16384, 00:07:04.316 "uuid": "44b76afe-201f-41ac-9900-5ee5749d6c26", 00:07:04.316 "assigned_rate_limits": { 00:07:04.316 "rw_ios_per_sec": 0, 00:07:04.316 "rw_mbytes_per_sec": 0, 00:07:04.316 "r_mbytes_per_sec": 0, 00:07:04.316 "w_mbytes_per_sec": 0 00:07:04.316 }, 00:07:04.316 "claimed": true, 00:07:04.316 "claim_type": "exclusive_write", 00:07:04.316 "zoned": false, 00:07:04.316 "supported_io_types": { 00:07:04.316 "read": true, 00:07:04.316 "write": true, 00:07:04.316 "unmap": true, 00:07:04.316 "flush": true, 00:07:04.316 "reset": true, 00:07:04.316 "nvme_admin": false, 00:07:04.316 "nvme_io": false, 00:07:04.316 "nvme_io_md": false, 00:07:04.316 "write_zeroes": true, 00:07:04.316 "zcopy": true, 00:07:04.316 "get_zone_info": false, 00:07:04.316 "zone_management": false, 00:07:04.316 "zone_append": false, 00:07:04.316 "compare": false, 00:07:04.316 "compare_and_write": false, 00:07:04.316 "abort": true, 00:07:04.316 "seek_hole": false, 00:07:04.316 "seek_data": false, 00:07:04.316 "copy": true, 00:07:04.316 "nvme_iov_md": false 00:07:04.316 }, 00:07:04.316 "memory_domains": [ 00:07:04.316 { 00:07:04.316 "dma_device_id": "system", 00:07:04.316 "dma_device_type": 1 00:07:04.316 }, 00:07:04.316 { 00:07:04.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.316 "dma_device_type": 2 00:07:04.316 } 00:07:04.316 ], 00:07:04.316 "driver_specific": {} 00:07:04.316 }, 00:07:04.316 { 00:07:04.316 "name": "Passthru0", 00:07:04.316 "aliases": [ 00:07:04.316 "2b6f534c-0528-52a6-b877-8790788f4c15" 00:07:04.316 ], 00:07:04.316 "product_name": "passthru", 00:07:04.316 "block_size": 512, 00:07:04.316 "num_blocks": 16384, 00:07:04.316 "uuid": "2b6f534c-0528-52a6-b877-8790788f4c15", 00:07:04.316 "assigned_rate_limits": { 00:07:04.316 "rw_ios_per_sec": 0, 00:07:04.316 "rw_mbytes_per_sec": 0, 00:07:04.316 "r_mbytes_per_sec": 0, 00:07:04.316 "w_mbytes_per_sec": 0 00:07:04.316 }, 00:07:04.316 "claimed": false, 00:07:04.316 "zoned": false, 00:07:04.316 "supported_io_types": { 00:07:04.316 "read": true, 00:07:04.316 "write": true, 00:07:04.316 "unmap": true, 00:07:04.316 "flush": true, 00:07:04.316 "reset": true, 00:07:04.316 "nvme_admin": false, 00:07:04.316 "nvme_io": false, 00:07:04.316 "nvme_io_md": false, 00:07:04.316 "write_zeroes": true, 00:07:04.316 "zcopy": true, 00:07:04.316 "get_zone_info": false, 00:07:04.316 "zone_management": false, 00:07:04.316 "zone_append": false, 00:07:04.316 "compare": false, 00:07:04.316 "compare_and_write": false, 00:07:04.316 "abort": true, 00:07:04.316 "seek_hole": false, 00:07:04.316 "seek_data": false, 00:07:04.316 "copy": true, 00:07:04.316 "nvme_iov_md": false 00:07:04.316 }, 00:07:04.316 "memory_domains": [ 00:07:04.316 { 00:07:04.316 "dma_device_id": "system", 00:07:04.316 "dma_device_type": 1 00:07:04.316 }, 00:07:04.316 { 00:07:04.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.316 "dma_device_type": 2 
00:07:04.316 } 00:07:04.316 ], 00:07:04.316 "driver_specific": { 00:07:04.316 "passthru": { 00:07:04.316 "name": "Passthru0", 00:07:04.316 "base_bdev_name": "Malloc0" 00:07:04.316 } 00:07:04.316 } 00:07:04.316 } 00:07:04.316 ]' 00:07:04.316 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:04.316 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:04.316 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.316 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.316 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.316 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.316 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:04.316 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:04.575 ************************************ 00:07:04.575 END TEST rpc_integrity 00:07:04.575 ************************************ 00:07:04.575 11:15:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:04.575 00:07:04.575 real 0m0.366s 00:07:04.575 user 0m0.221s 00:07:04.575 sys 0m0.045s 00:07:04.575 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.575 11:15:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.575 11:15:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:04.575 11:15:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.575 11:15:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.575 11:15:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.575 ************************************ 00:07:04.575 START TEST rpc_plugins 00:07:04.575 ************************************ 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:04.575 11:15:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.575 11:15:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:04.575 11:15:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.575 11:15:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:04.575 { 00:07:04.575 "name": "Malloc1", 00:07:04.575 "aliases": 
[ 00:07:04.575 "c47ba2af-aa4d-446f-9146-b19937baa527" 00:07:04.575 ], 00:07:04.575 "product_name": "Malloc disk", 00:07:04.575 "block_size": 4096, 00:07:04.575 "num_blocks": 256, 00:07:04.575 "uuid": "c47ba2af-aa4d-446f-9146-b19937baa527", 00:07:04.575 "assigned_rate_limits": { 00:07:04.575 "rw_ios_per_sec": 0, 00:07:04.575 "rw_mbytes_per_sec": 0, 00:07:04.575 "r_mbytes_per_sec": 0, 00:07:04.575 "w_mbytes_per_sec": 0 00:07:04.575 }, 00:07:04.575 "claimed": false, 00:07:04.575 "zoned": false, 00:07:04.575 "supported_io_types": { 00:07:04.575 "read": true, 00:07:04.575 "write": true, 00:07:04.575 "unmap": true, 00:07:04.575 "flush": true, 00:07:04.575 "reset": true, 00:07:04.575 "nvme_admin": false, 00:07:04.575 "nvme_io": false, 00:07:04.575 "nvme_io_md": false, 00:07:04.575 "write_zeroes": true, 00:07:04.575 "zcopy": true, 00:07:04.575 "get_zone_info": false, 00:07:04.575 "zone_management": false, 00:07:04.575 "zone_append": false, 00:07:04.575 "compare": false, 00:07:04.575 "compare_and_write": false, 00:07:04.575 "abort": true, 00:07:04.575 "seek_hole": false, 00:07:04.575 "seek_data": false, 00:07:04.575 "copy": true, 00:07:04.575 "nvme_iov_md": false 00:07:04.575 }, 00:07:04.575 "memory_domains": [ 00:07:04.575 { 00:07:04.575 "dma_device_id": "system", 00:07:04.575 "dma_device_type": 1 00:07:04.575 }, 00:07:04.575 { 00:07:04.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.575 "dma_device_type": 2 00:07:04.575 } 00:07:04.575 ], 00:07:04.575 "driver_specific": {} 00:07:04.575 } 00:07:04.575 ]' 00:07:04.575 11:15:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:04.575 11:15:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:04.575 11:15:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.575 11:15:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.575 11:15:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:04.575 11:15:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:04.575 ************************************ 00:07:04.575 END TEST rpc_plugins 00:07:04.575 ************************************ 00:07:04.575 11:15:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:04.575 00:07:04.575 real 0m0.166s 00:07:04.575 user 0m0.103s 00:07:04.575 sys 0m0.020s 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.575 11:15:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.835 11:15:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:04.835 11:15:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.835 11:15:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.835 11:15:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.835 ************************************ 00:07:04.835 START TEST rpc_trace_cmd_test 00:07:04.835 ************************************ 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:04.835 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58299", 00:07:04.835 "tpoint_group_mask": "0x8", 00:07:04.835 "iscsi_conn": { 00:07:04.835 "mask": "0x2", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "scsi": { 00:07:04.835 "mask": "0x4", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "bdev": { 00:07:04.835 "mask": "0x8", 00:07:04.835 "tpoint_mask": "0xffffffffffffffff" 00:07:04.835 }, 00:07:04.835 "nvmf_rdma": { 00:07:04.835 "mask": "0x10", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "nvmf_tcp": { 00:07:04.835 "mask": "0x20", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "ftl": { 00:07:04.835 "mask": "0x40", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "blobfs": { 00:07:04.835 "mask": "0x80", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "dsa": { 00:07:04.835 "mask": "0x200", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "thread": { 00:07:04.835 "mask": "0x400", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "nvme_pcie": { 00:07:04.835 "mask": "0x800", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "iaa": { 00:07:04.835 "mask": "0x1000", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "nvme_tcp": { 00:07:04.835 "mask": "0x2000", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "bdev_nvme": { 00:07:04.835 "mask": "0x4000", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "sock": { 00:07:04.835 "mask": "0x8000", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "blob": { 00:07:04.835 "mask": "0x10000", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "bdev_raid": { 00:07:04.835 "mask": "0x20000", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 }, 00:07:04.835 "scheduler": { 00:07:04.835 "mask": "0x40000", 00:07:04.835 "tpoint_mask": "0x0" 00:07:04.835 } 00:07:04.835 }' 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:04.835 11:15:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:05.093 11:15:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:05.093 11:15:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:05.093 ************************************ 00:07:05.093 END TEST rpc_trace_cmd_test 00:07:05.093 ************************************ 00:07:05.093 11:15:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:05.093 00:07:05.093 real 0m0.308s 
00:07:05.093 user 0m0.266s 00:07:05.093 sys 0m0.032s 00:07:05.093 11:15:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.093 11:15:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.093 11:15:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:05.093 11:15:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:05.093 11:15:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:05.093 11:15:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.093 11:15:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.093 11:15:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.093 ************************************ 00:07:05.093 START TEST rpc_daemon_integrity 00:07:05.093 ************************************ 00:07:05.093 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:05.093 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:05.093 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.093 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.093 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.093 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:05.093 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:05.093 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:05.093 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:05.094 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.094 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.094 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.094 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:05.094 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:05.094 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.094 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.094 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.094 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:05.094 { 00:07:05.094 "name": "Malloc2", 00:07:05.094 "aliases": [ 00:07:05.094 "a42038d0-e81b-4629-93b1-71ce275263e0" 00:07:05.094 ], 00:07:05.094 "product_name": "Malloc disk", 00:07:05.094 "block_size": 512, 00:07:05.094 "num_blocks": 16384, 00:07:05.094 "uuid": "a42038d0-e81b-4629-93b1-71ce275263e0", 00:07:05.094 "assigned_rate_limits": { 00:07:05.094 "rw_ios_per_sec": 0, 00:07:05.094 "rw_mbytes_per_sec": 0, 00:07:05.094 "r_mbytes_per_sec": 0, 00:07:05.094 "w_mbytes_per_sec": 0 00:07:05.094 }, 00:07:05.094 "claimed": false, 00:07:05.094 "zoned": false, 00:07:05.094 "supported_io_types": { 00:07:05.094 "read": true, 00:07:05.094 "write": true, 00:07:05.094 "unmap": true, 00:07:05.094 "flush": true, 00:07:05.094 "reset": true, 00:07:05.094 "nvme_admin": false, 00:07:05.094 "nvme_io": false, 00:07:05.094 "nvme_io_md": false, 00:07:05.094 "write_zeroes": true, 00:07:05.094 "zcopy": true, 00:07:05.094 "get_zone_info": false, 00:07:05.094 "zone_management": false, 00:07:05.094 "zone_append": false, 00:07:05.094 "compare": false, 00:07:05.094 
"compare_and_write": false, 00:07:05.094 "abort": true, 00:07:05.094 "seek_hole": false, 00:07:05.094 "seek_data": false, 00:07:05.094 "copy": true, 00:07:05.094 "nvme_iov_md": false 00:07:05.094 }, 00:07:05.094 "memory_domains": [ 00:07:05.094 { 00:07:05.094 "dma_device_id": "system", 00:07:05.094 "dma_device_type": 1 00:07:05.094 }, 00:07:05.094 { 00:07:05.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.094 "dma_device_type": 2 00:07:05.094 } 00:07:05.094 ], 00:07:05.094 "driver_specific": {} 00:07:05.094 } 00:07:05.094 ]' 00:07:05.094 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:05.352 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:05.352 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:05.352 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.352 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.352 [2024-12-10 11:15:27.294514] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:05.352 [2024-12-10 11:15:27.294623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.352 [2024-12-10 11:15:27.294685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:05.352 [2024-12-10 11:15:27.294710] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.352 [2024-12-10 11:15:27.297841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.352 [2024-12-10 11:15:27.297909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:05.352 Passthru0 00:07:05.352 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.352 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:05.352 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.352 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:05.353 { 00:07:05.353 "name": "Malloc2", 00:07:05.353 "aliases": [ 00:07:05.353 "a42038d0-e81b-4629-93b1-71ce275263e0" 00:07:05.353 ], 00:07:05.353 "product_name": "Malloc disk", 00:07:05.353 "block_size": 512, 00:07:05.353 "num_blocks": 16384, 00:07:05.353 "uuid": "a42038d0-e81b-4629-93b1-71ce275263e0", 00:07:05.353 "assigned_rate_limits": { 00:07:05.353 "rw_ios_per_sec": 0, 00:07:05.353 "rw_mbytes_per_sec": 0, 00:07:05.353 "r_mbytes_per_sec": 0, 00:07:05.353 "w_mbytes_per_sec": 0 00:07:05.353 }, 00:07:05.353 "claimed": true, 00:07:05.353 "claim_type": "exclusive_write", 00:07:05.353 "zoned": false, 00:07:05.353 "supported_io_types": { 00:07:05.353 "read": true, 00:07:05.353 "write": true, 00:07:05.353 "unmap": true, 00:07:05.353 "flush": true, 00:07:05.353 "reset": true, 00:07:05.353 "nvme_admin": false, 00:07:05.353 "nvme_io": false, 00:07:05.353 "nvme_io_md": false, 00:07:05.353 "write_zeroes": true, 00:07:05.353 "zcopy": true, 00:07:05.353 "get_zone_info": false, 00:07:05.353 "zone_management": false, 00:07:05.353 "zone_append": false, 00:07:05.353 "compare": false, 00:07:05.353 "compare_and_write": false, 00:07:05.353 "abort": true, 00:07:05.353 "seek_hole": false, 00:07:05.353 "seek_data": false, 
00:07:05.353 "copy": true, 00:07:05.353 "nvme_iov_md": false 00:07:05.353 }, 00:07:05.353 "memory_domains": [ 00:07:05.353 { 00:07:05.353 "dma_device_id": "system", 00:07:05.353 "dma_device_type": 1 00:07:05.353 }, 00:07:05.353 { 00:07:05.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.353 "dma_device_type": 2 00:07:05.353 } 00:07:05.353 ], 00:07:05.353 "driver_specific": {} 00:07:05.353 }, 00:07:05.353 { 00:07:05.353 "name": "Passthru0", 00:07:05.353 "aliases": [ 00:07:05.353 "d6486686-982d-5a21-8c6a-32e960dc5e62" 00:07:05.353 ], 00:07:05.353 "product_name": "passthru", 00:07:05.353 "block_size": 512, 00:07:05.353 "num_blocks": 16384, 00:07:05.353 "uuid": "d6486686-982d-5a21-8c6a-32e960dc5e62", 00:07:05.353 "assigned_rate_limits": { 00:07:05.353 "rw_ios_per_sec": 0, 00:07:05.353 "rw_mbytes_per_sec": 0, 00:07:05.353 "r_mbytes_per_sec": 0, 00:07:05.353 "w_mbytes_per_sec": 0 00:07:05.353 }, 00:07:05.353 "claimed": false, 00:07:05.353 "zoned": false, 00:07:05.353 "supported_io_types": { 00:07:05.353 "read": true, 00:07:05.353 "write": true, 00:07:05.353 "unmap": true, 00:07:05.353 "flush": true, 00:07:05.353 "reset": true, 00:07:05.353 "nvme_admin": false, 00:07:05.353 "nvme_io": false, 00:07:05.353 "nvme_io_md": false, 00:07:05.353 "write_zeroes": true, 00:07:05.353 "zcopy": true, 00:07:05.353 "get_zone_info": false, 00:07:05.353 "zone_management": false, 00:07:05.353 "zone_append": false, 00:07:05.353 "compare": false, 00:07:05.353 "compare_and_write": false, 00:07:05.353 "abort": true, 00:07:05.353 "seek_hole": false, 00:07:05.353 "seek_data": false, 00:07:05.353 "copy": true, 00:07:05.353 "nvme_iov_md": false 00:07:05.353 }, 00:07:05.353 "memory_domains": [ 00:07:05.353 { 00:07:05.353 "dma_device_id": "system", 00:07:05.353 "dma_device_type": 1 00:07:05.353 }, 00:07:05.353 { 00:07:05.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.353 "dma_device_type": 2 00:07:05.353 } 00:07:05.353 ], 00:07:05.353 "driver_specific": { 00:07:05.353 "passthru": { 00:07:05.353 "name": "Passthru0", 00:07:05.353 "base_bdev_name": "Malloc2" 00:07:05.353 } 00:07:05.353 } 00:07:05.353 } 00:07:05.353 ]' 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:07:05.353 ************************************
00:07:05.353 END TEST rpc_daemon_integrity
00:07:05.353 ************************************
00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:07:05.353
00:07:05.353 real 0m0.354s
00:07:05.353 user 0m0.223s
00:07:05.353 sys 0m0.042s
00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:05.353 11:15:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:05.611 11:15:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:07:05.611 11:15:27 rpc -- rpc/rpc.sh@84 -- # killprocess 58299
00:07:05.611 11:15:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 58299 ']'
00:07:05.612 11:15:27 rpc -- common/autotest_common.sh@958 -- # kill -0 58299
00:07:05.612 11:15:27 rpc -- common/autotest_common.sh@959 -- # uname
00:07:05.612 11:15:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:05.612 11:15:27 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58299
00:07:05.612 killing process with pid 58299
00:07:05.612 11:15:27 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:05.612 11:15:27 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:05.612 11:15:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58299'
00:07:05.612 11:15:27 rpc -- common/autotest_common.sh@973 -- # kill 58299
00:07:05.612 11:15:27 rpc -- common/autotest_common.sh@978 -- # wait 58299
00:07:08.142 ************************************
00:07:08.142 END TEST rpc
00:07:08.142 ************************************
00:07:08.142
00:07:08.142 real 0m5.138s
00:07:08.142 user 0m6.035s
00:07:08.142 sys 0m0.801s
00:07:08.142 11:15:29 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.142 11:15:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:08.142 11:15:29 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:07:08.142 11:15:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:08.142 11:15:29 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.142 11:15:29 -- common/autotest_common.sh@10 -- # set +x
00:07:08.142 ************************************
00:07:08.142 START TEST skip_rpc
00:07:08.142 ************************************
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:07:08.142 * Looking for test storage...
00:07:08.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@345 -- # : 1
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:08.142 11:15:29 skip_rpc -- scripts/common.sh@368 -- # return 0
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:08.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.142 --rc genhtml_branch_coverage=1
00:07:08.142 --rc genhtml_function_coverage=1
00:07:08.142 --rc genhtml_legend=1
00:07:08.142 --rc geninfo_all_blocks=1
00:07:08.142 --rc geninfo_unexecuted_blocks=1
00:07:08.142
00:07:08.142 '
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:08.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.142 --rc genhtml_branch_coverage=1
00:07:08.142 --rc genhtml_function_coverage=1
00:07:08.142 --rc genhtml_legend=1
00:07:08.142 --rc geninfo_all_blocks=1
00:07:08.142 --rc geninfo_unexecuted_blocks=1
00:07:08.142
00:07:08.142 '
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:07:08.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.142 --rc genhtml_branch_coverage=1
00:07:08.142 --rc genhtml_function_coverage=1
00:07:08.142 --rc genhtml_legend=1
00:07:08.142 --rc geninfo_all_blocks=1
00:07:08.142 --rc geninfo_unexecuted_blocks=1
00:07:08.142
00:07:08.142 '
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:07:08.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.142 --rc genhtml_branch_coverage=1
00:07:08.142 --rc genhtml_function_coverage=1
00:07:08.142 --rc genhtml_legend=1
00:07:08.142 --rc geninfo_all_blocks=1
00:07:08.142 --rc geninfo_unexecuted_blocks=1
00:07:08.142
00:07:08.142 '
00:07:08.142 11:15:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:07:08.142 11:15:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:07:08.142 11:15:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.142 11:15:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:08.142 ************************************
00:07:08.142 START TEST skip_rpc
00:07:08.142 ************************************
00:07:08.142 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:07:08.142 11:15:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58528
00:07:08.143 11:15:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:07:08.143 11:15:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:08.143 11:15:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:07:08.143 [2024-12-10 11:15:30.135236] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:07:08.143 [2024-12-10 11:15:30.135414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58528 ]
00:07:08.401 [2024-12-10 11:15:30.311820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:08.401 [2024-12-10 11:15:30.441073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.684 11:15:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:07:13.684 11:15:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:07:13.684 11:15:34 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:07:13.684 11:15:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:13.684 11:15:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:13.684 11:15:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58528
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58528 ']'
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58528
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58528
00:07:13.684 killing process with pid 58528
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58528'
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58528
00:07:13.684 11:15:35 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58528
00:07:15.058
00:07:15.058 real 0m7.115s
00:07:15.058 user 0m6.659s
00:07:15.058 sys 0m0.351s
00:07:15.058 11:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:15.058 ************************************
00:07:15.058 END TEST skip_rpc
00:07:15.058 ************************************
00:07:15.058 11:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:15.059 11:15:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:07:15.059 11:15:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:15.059 11:15:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:15.059 11:15:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:15.059 ************************************
00:07:15.059 START TEST skip_rpc_with_json
00:07:15.059 ************************************
00:07:15.059 11:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:07:15.059 11:15:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:07:15.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:15.059 11:15:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58632
00:07:15.059 11:15:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:15.059 11:15:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:15.059 11:15:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58632
00:07:15.059 11:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58632 ']'
00:07:15.059 11:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.059 11:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:15.059 11:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:15.059 11:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:15.059 11:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:15.317 [2024-12-10 11:15:37.282075] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:07:15.317 [2024-12-10 11:15:37.282270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58632 ]
00:07:15.317 [2024-12-10 11:15:37.467050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.576 [2024-12-10 11:15:37.584739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:16.513 [2024-12-10 11:15:38.410304] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:07:16.513 request:
00:07:16.513 {
00:07:16.513 "trtype": "tcp",
00:07:16.513 "method": "nvmf_get_transports",
00:07:16.513 "req_id": 1
00:07:16.513 }
00:07:16.513 Got JSON-RPC error response
00:07:16.513 response:
00:07:16.513 {
00:07:16.513 "code": -19,
00:07:16.513 "message": "No such device"
00:07:16.513 }
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:16.513 [2024-12-10 11:15:38.422501] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.513 11:15:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:07:16.513 {
00:07:16.513 "subsystems": [
00:07:16.513 {
00:07:16.513 "subsystem": "fsdev",
00:07:16.513 "config": [
00:07:16.513 {
00:07:16.513 "method": "fsdev_set_opts",
00:07:16.513 "params": {
00:07:16.513 "fsdev_io_pool_size": 65535,
00:07:16.513 "fsdev_io_cache_size": 256
00:07:16.513 }
00:07:16.513 }
00:07:16.513 ]
00:07:16.513 },
00:07:16.513 {
00:07:16.513 "subsystem": "keyring",
00:07:16.513 "config": []
00:07:16.513 },
00:07:16.513 {
00:07:16.513 "subsystem": "iobuf",
00:07:16.513 "config": [
00:07:16.513 {
00:07:16.513 "method": "iobuf_set_options",
00:07:16.513 "params": {
00:07:16.514 "small_pool_count": 8192,
00:07:16.514 "large_pool_count": 1024,
00:07:16.514 "small_bufsize": 8192,
00:07:16.514 "large_bufsize": 135168,
00:07:16.514 "enable_numa": false
00:07:16.514 }
00:07:16.514 }
00:07:16.514 ]
00:07:16.514 },
00:07:16.514 {
00:07:16.514 "subsystem": "sock",
00:07:16.514 "config": [
00:07:16.514 {
00:07:16.514 "method": "sock_set_default_impl", 00:07:16.514 "params": { 00:07:16.514 "impl_name": "posix" 00:07:16.514 } 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "method": "sock_impl_set_options", 00:07:16.514 "params": { 00:07:16.514 "impl_name": "ssl", 00:07:16.514 "recv_buf_size": 4096, 00:07:16.514 "send_buf_size": 4096, 00:07:16.514 "enable_recv_pipe": true, 00:07:16.514 "enable_quickack": false, 00:07:16.514 "enable_placement_id": 0, 00:07:16.514 "enable_zerocopy_send_server": true, 00:07:16.514 "enable_zerocopy_send_client": false, 00:07:16.514 "zerocopy_threshold": 0, 00:07:16.514 "tls_version": 0, 00:07:16.514 "enable_ktls": false 00:07:16.514 } 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "method": "sock_impl_set_options", 00:07:16.514 "params": { 00:07:16.514 "impl_name": "posix", 00:07:16.514 "recv_buf_size": 2097152, 00:07:16.514 "send_buf_size": 2097152, 00:07:16.514 "enable_recv_pipe": true, 00:07:16.514 "enable_quickack": false, 00:07:16.514 "enable_placement_id": 0, 00:07:16.514 "enable_zerocopy_send_server": true, 00:07:16.514 "enable_zerocopy_send_client": false, 00:07:16.514 "zerocopy_threshold": 0, 00:07:16.514 "tls_version": 0, 00:07:16.514 "enable_ktls": false 00:07:16.514 } 00:07:16.514 } 00:07:16.514 ] 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "subsystem": "vmd", 00:07:16.514 "config": [] 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "subsystem": "accel", 00:07:16.514 "config": [ 00:07:16.514 { 00:07:16.514 "method": "accel_set_options", 00:07:16.514 "params": { 00:07:16.514 "small_cache_size": 128, 00:07:16.514 "large_cache_size": 16, 00:07:16.514 "task_count": 2048, 00:07:16.514 "sequence_count": 2048, 00:07:16.514 "buf_count": 2048 00:07:16.514 } 00:07:16.514 } 00:07:16.514 ] 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "subsystem": "bdev", 00:07:16.514 "config": [ 00:07:16.514 { 00:07:16.514 "method": "bdev_set_options", 00:07:16.514 "params": { 00:07:16.514 "bdev_io_pool_size": 65535, 00:07:16.514 "bdev_io_cache_size": 256, 00:07:16.514 "bdev_auto_examine": true, 00:07:16.514 "iobuf_small_cache_size": 128, 00:07:16.514 "iobuf_large_cache_size": 16 00:07:16.514 } 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "method": "bdev_raid_set_options", 00:07:16.514 "params": { 00:07:16.514 "process_window_size_kb": 1024, 00:07:16.514 "process_max_bandwidth_mb_sec": 0 00:07:16.514 } 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "method": "bdev_iscsi_set_options", 00:07:16.514 "params": { 00:07:16.514 "timeout_sec": 30 00:07:16.514 } 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "method": "bdev_nvme_set_options", 00:07:16.514 "params": { 00:07:16.514 "action_on_timeout": "none", 00:07:16.514 "timeout_us": 0, 00:07:16.514 "timeout_admin_us": 0, 00:07:16.514 "keep_alive_timeout_ms": 10000, 00:07:16.514 "arbitration_burst": 0, 00:07:16.514 "low_priority_weight": 0, 00:07:16.514 "medium_priority_weight": 0, 00:07:16.514 "high_priority_weight": 0, 00:07:16.514 "nvme_adminq_poll_period_us": 10000, 00:07:16.514 "nvme_ioq_poll_period_us": 0, 00:07:16.514 "io_queue_requests": 0, 00:07:16.514 "delay_cmd_submit": true, 00:07:16.514 "transport_retry_count": 4, 00:07:16.514 "bdev_retry_count": 3, 00:07:16.514 "transport_ack_timeout": 0, 00:07:16.514 "ctrlr_loss_timeout_sec": 0, 00:07:16.514 "reconnect_delay_sec": 0, 00:07:16.514 "fast_io_fail_timeout_sec": 0, 00:07:16.514 "disable_auto_failback": false, 00:07:16.514 "generate_uuids": false, 00:07:16.514 "transport_tos": 0, 00:07:16.514 "nvme_error_stat": false, 00:07:16.514 "rdma_srq_size": 0, 00:07:16.514 "io_path_stat": false, 
00:07:16.514 "allow_accel_sequence": false, 00:07:16.514 "rdma_max_cq_size": 0, 00:07:16.514 "rdma_cm_event_timeout_ms": 0, 00:07:16.514 "dhchap_digests": [ 00:07:16.514 "sha256", 00:07:16.514 "sha384", 00:07:16.514 "sha512" 00:07:16.514 ], 00:07:16.514 "dhchap_dhgroups": [ 00:07:16.514 "null", 00:07:16.514 "ffdhe2048", 00:07:16.514 "ffdhe3072", 00:07:16.514 "ffdhe4096", 00:07:16.514 "ffdhe6144", 00:07:16.514 "ffdhe8192" 00:07:16.514 ] 00:07:16.514 } 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "method": "bdev_nvme_set_hotplug", 00:07:16.514 "params": { 00:07:16.514 "period_us": 100000, 00:07:16.514 "enable": false 00:07:16.514 } 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "method": "bdev_wait_for_examine" 00:07:16.514 } 00:07:16.514 ] 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "subsystem": "scsi", 00:07:16.514 "config": null 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "subsystem": "scheduler", 00:07:16.514 "config": [ 00:07:16.514 { 00:07:16.514 "method": "framework_set_scheduler", 00:07:16.514 "params": { 00:07:16.514 "name": "static" 00:07:16.514 } 00:07:16.514 } 00:07:16.514 ] 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "subsystem": "vhost_scsi", 00:07:16.514 "config": [] 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "subsystem": "vhost_blk", 00:07:16.514 "config": [] 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "subsystem": "ublk", 00:07:16.514 "config": [] 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "subsystem": "nbd", 00:07:16.514 "config": [] 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "subsystem": "nvmf", 00:07:16.514 "config": [ 00:07:16.514 { 00:07:16.514 "method": "nvmf_set_config", 00:07:16.514 "params": { 00:07:16.514 "discovery_filter": "match_any", 00:07:16.514 "admin_cmd_passthru": { 00:07:16.514 "identify_ctrlr": false 00:07:16.514 }, 00:07:16.514 "dhchap_digests": [ 00:07:16.514 "sha256", 00:07:16.514 "sha384", 00:07:16.514 "sha512" 00:07:16.514 ], 00:07:16.514 "dhchap_dhgroups": [ 00:07:16.514 "null", 00:07:16.514 "ffdhe2048", 00:07:16.514 "ffdhe3072", 00:07:16.514 "ffdhe4096", 00:07:16.514 "ffdhe6144", 00:07:16.514 "ffdhe8192" 00:07:16.514 ] 00:07:16.514 } 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "method": "nvmf_set_max_subsystems", 00:07:16.514 "params": { 00:07:16.514 "max_subsystems": 1024 00:07:16.514 } 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "method": "nvmf_set_crdt", 00:07:16.514 "params": { 00:07:16.514 "crdt1": 0, 00:07:16.514 "crdt2": 0, 00:07:16.514 "crdt3": 0 00:07:16.514 } 00:07:16.514 }, 00:07:16.514 { 00:07:16.514 "method": "nvmf_create_transport", 00:07:16.514 "params": { 00:07:16.514 "trtype": "TCP", 00:07:16.514 "max_queue_depth": 128, 00:07:16.514 "max_io_qpairs_per_ctrlr": 127, 00:07:16.514 "in_capsule_data_size": 4096, 00:07:16.514 "max_io_size": 131072, 00:07:16.514 "io_unit_size": 131072, 00:07:16.514 "max_aq_depth": 128, 00:07:16.514 "num_shared_buffers": 511, 00:07:16.514 "buf_cache_size": 4294967295, 00:07:16.514 "dif_insert_or_strip": false, 00:07:16.514 "zcopy": false, 00:07:16.514 "c2h_success": true, 00:07:16.514 "sock_priority": 0, 00:07:16.515 "abort_timeout_sec": 1, 00:07:16.515 "ack_timeout": 0, 00:07:16.515 "data_wr_pool_size": 0 00:07:16.515 } 00:07:16.515 } 00:07:16.515 ] 00:07:16.515 }, 00:07:16.515 { 00:07:16.515 "subsystem": "iscsi", 00:07:16.515 "config": [ 00:07:16.515 { 00:07:16.515 "method": "iscsi_set_options", 00:07:16.515 "params": { 00:07:16.515 "node_base": "iqn.2016-06.io.spdk", 00:07:16.515 "max_sessions": 128, 00:07:16.515 "max_connections_per_session": 2, 00:07:16.515 "max_queue_depth": 64, 00:07:16.515 
"default_time2wait": 2, 00:07:16.515 "default_time2retain": 20, 00:07:16.515 "first_burst_length": 8192, 00:07:16.515 "immediate_data": true, 00:07:16.515 "allow_duplicated_isid": false, 00:07:16.515 "error_recovery_level": 0, 00:07:16.515 "nop_timeout": 60, 00:07:16.515 "nop_in_interval": 30, 00:07:16.515 "disable_chap": false, 00:07:16.515 "require_chap": false, 00:07:16.515 "mutual_chap": false, 00:07:16.515 "chap_group": 0, 00:07:16.515 "max_large_datain_per_connection": 64, 00:07:16.515 "max_r2t_per_connection": 4, 00:07:16.515 "pdu_pool_size": 36864, 00:07:16.515 "immediate_data_pool_size": 16384, 00:07:16.515 "data_out_pool_size": 2048 00:07:16.515 } 00:07:16.515 } 00:07:16.515 ] 00:07:16.515 } 00:07:16.515 ] 00:07:16.515 } 00:07:16.515 11:15:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:16.515 11:15:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58632 00:07:16.515 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58632 ']' 00:07:16.515 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58632 00:07:16.515 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:16.515 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.515 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58632 00:07:16.515 killing process with pid 58632 00:07:16.515 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.515 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.515 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58632' 00:07:16.515 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58632 00:07:16.515 11:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58632 00:07:19.054 11:15:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58677 00:07:19.054 11:15:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:19.054 11:15:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:24.319 11:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58677 00:07:24.319 11:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58677 ']' 00:07:24.319 11:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58677 00:07:24.319 11:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:24.319 11:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.319 11:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58677 00:07:24.319 killing process with pid 58677 00:07:24.319 11:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.319 11:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.319 11:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58677' 00:07:24.319 11:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58677 00:07:24.319 11:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58677 00:07:26.221 11:15:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:26.221 11:15:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:26.221 ************************************ 00:07:26.221 END TEST skip_rpc_with_json 00:07:26.221 ************************************ 00:07:26.221 00:07:26.221 real 0m10.754s 00:07:26.221 user 0m10.489s 00:07:26.221 sys 0m0.753s 00:07:26.221 11:15:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.221 11:15:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:26.221 11:15:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:26.221 11:15:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.221 11:15:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.221 11:15:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.221 ************************************ 00:07:26.221 START TEST skip_rpc_with_delay 00:07:26.221 ************************************ 00:07:26.221 11:15:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:26.221 11:15:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:26.221 11:15:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:26.221 11:15:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:26.221 11:15:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:26.222 11:15:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.222 11:15:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:26.222 11:15:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.222 11:15:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:26.222 11:15:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.222 11:15:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:26.222 11:15:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:26.222 11:15:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:26.222 [2024-12-10 11:15:48.090000] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:26.222 11:15:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:07:26.222 11:15:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:26.222 11:15:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:26.222 11:15:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:26.222 ************************************
00:07:26.222 END TEST skip_rpc_with_delay
00:07:26.222 ************************************
00:07:26.222
00:07:26.222 real 0m0.200s
00:07:26.222 user 0m0.112s
00:07:26.222 sys 0m0.085s
00:07:26.222 11:15:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.222 11:15:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:07:26.222 11:15:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:07:26.222 11:15:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:07:26.222 11:15:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:07:26.222 11:15:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:26.222 11:15:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.222 11:15:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:26.222 ************************************
00:07:26.222 START TEST exit_on_failed_rpc_init
00:07:26.222 ************************************
00:07:26.222 11:15:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:07:26.222 11:15:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58811
00:07:26.222 11:15:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:26.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:26.222 11:15:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58811
00:07:26.222 11:15:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58811 ']'
00:07:26.222 11:15:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:26.222 11:15:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:26.222 11:15:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:26.222 11:15:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:26.222 11:15:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:07:26.222 [2024-12-10 11:15:48.347471] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:07:26.222 [2024-12-10 11:15:48.347907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58811 ] 00:07:26.480 [2024-12-10 11:15:48.535172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.739 [2024-12-10 11:15:48.661071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:27.672 11:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:27.672 [2024-12-10 11:15:49.601289] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:07:27.672 [2024-12-10 11:15:49.601467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58834 ] 00:07:27.672 [2024-12-10 11:15:49.786760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.931 [2024-12-10 11:15:49.916079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.931 [2024-12-10 11:15:49.916229] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:27.931 [2024-12-10 11:15:49.916257] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:07:27.931 [2024-12-10 11:15:49.916284] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58811
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58811 ']'
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58811
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58811
00:07:28.188 killing process with pid 58811
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58811'
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58811
00:07:28.188 11:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58811
00:07:30.794
00:07:30.794 real 0m4.200s
00:07:30.794 user 0m4.752s
00:07:30.794 sys 0m0.576s
00:07:30.794 11:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.794 ************************************
00:07:30.794 END TEST exit_on_failed_rpc_init
00:07:30.794 ************************************
00:07:30.794 11:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:07:30.794 11:15:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:07:30.794 ************************************
00:07:30.794 END TEST skip_rpc
00:07:30.794
00:07:30.794 real 0m22.668s
00:07:30.794 user 0m22.202s
00:07:30.794 sys 0m1.963s
00:07:30.794 11:15:52 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.794 11:15:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:30.794 ************************************
00:07:30.794 11:15:52 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:07:30.794 11:15:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:30.794 11:15:52 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.794 11:15:52 -- common/autotest_common.sh@10 -- # set +x
00:07:30.794 ************************************
00:07:30.794 START TEST rpc_client
00:07:30.794 ************************************
00:07:30.794 11:15:52 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:07:30.794 * Looking for test storage...
00:07:30.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:07:30.794 11:15:52 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:30.794 11:15:52 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:07:30.794 11:15:52 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:30.794 11:15:52 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@345 -- # : 1
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@353 -- # local d=1
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@355 -- # echo 1
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@353 -- # local d=2
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@355 -- # echo 2
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:30.794 11:15:52 rpc_client -- scripts/common.sh@368 -- # return 0
00:07:30.794 11:15:52 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:30.794 11:15:52 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:30.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:30.794 --rc genhtml_branch_coverage=1
00:07:30.794 --rc genhtml_function_coverage=1
00:07:30.794 --rc genhtml_legend=1
00:07:30.794 --rc geninfo_all_blocks=1
00:07:30.794 --rc geninfo_unexecuted_blocks=1
00:07:30.794
00:07:30.794 '
00:07:30.794 11:15:52 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:30.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:30.794 --rc genhtml_branch_coverage=1
00:07:30.794 --rc genhtml_function_coverage=1
00:07:30.794 --rc genhtml_legend=1
00:07:30.794 --rc geninfo_all_blocks=1
00:07:30.794 --rc geninfo_unexecuted_blocks=1
00:07:30.794
00:07:30.794 '
00:07:30.794 11:15:52 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:07:30.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:30.794 --rc genhtml_branch_coverage=1
00:07:30.794 --rc genhtml_function_coverage=1
00:07:30.794 --rc genhtml_legend=1
00:07:30.794 --rc geninfo_all_blocks=1
00:07:30.794 --rc geninfo_unexecuted_blocks=1
00:07:30.794
00:07:30.794 '
00:07:30.794 11:15:52 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:07:30.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:30.794 --rc genhtml_branch_coverage=1
00:07:30.794 --rc genhtml_function_coverage=1
00:07:30.794 --rc genhtml_legend=1
00:07:30.794 --rc geninfo_all_blocks=1
00:07:30.794 --rc geninfo_unexecuted_blocks=1
00:07:30.794
00:07:30.794 '
00:07:30.795 11:15:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:07:30.795 OK
00:07:30.795 11:15:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:07:30.795
00:07:30.795 real 0m0.235s
00:07:30.795 user 0m0.156s
00:07:30.795 sys 0m0.086s
00:07:30.795 11:15:52 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.795 11:15:52 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:07:30.795 ************************************
00:07:30.795 END TEST rpc_client
00:07:30.795 ************************************
00:07:30.795 11:15:52 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:07:30.795 11:15:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:30.795 11:15:52 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.795 11:15:52 -- common/autotest_common.sh@10 -- # set +x
00:07:30.795 ************************************
00:07:30.795 START TEST json_config
00:07:30.795 ************************************
00:07:30.795 11:15:52 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:07:30.795 11:15:52 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:30.795 11:15:52 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:07:30.795 11:15:52 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:30.795 11:15:52 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:30.795 11:15:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:30.795 11:15:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:30.795 11:15:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:30.795 11:15:52 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:07:30.795 11:15:52 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:07:30.795 11:15:52 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:07:30.795 11:15:52 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:07:30.795 11:15:52 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:07:30.795 11:15:52 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:07:30.795 11:15:52 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:07:30.795 11:15:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:30.795 11:15:52 json_config -- scripts/common.sh@344 -- # case "$op" in
00:07:30.795 11:15:52 json_config -- scripts/common.sh@345 -- # : 1
00:07:30.795 11:15:52 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:30.795 11:15:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:30.795 11:15:52 json_config -- scripts/common.sh@365 -- # decimal 1
00:07:30.795 11:15:52 json_config -- scripts/common.sh@353 -- # local d=1
00:07:30.795 11:15:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:30.795 11:15:52 json_config -- scripts/common.sh@355 -- # echo 1
00:07:30.795 11:15:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:07:30.795 11:15:52 json_config -- scripts/common.sh@366 -- # decimal 2
00:07:30.795 11:15:52 json_config -- scripts/common.sh@353 -- # local d=2
00:07:30.795 11:15:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:30.795 11:15:52 json_config -- scripts/common.sh@355 -- # echo 2
00:07:30.795 11:15:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:07:30.795 11:15:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:30.795 11:15:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:30.795 11:15:52 json_config -- scripts/common.sh@368 -- # return 0
00:07:30.795 11:15:52 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:30.795 11:15:52 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:30.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:30.795 --rc genhtml_branch_coverage=1
00:07:30.795 --rc genhtml_function_coverage=1
00:07:30.795 --rc genhtml_legend=1
00:07:30.795 --rc geninfo_all_blocks=1
00:07:30.795 --rc geninfo_unexecuted_blocks=1
00:07:30.795
00:07:30.795 '
00:07:30.795 11:15:52 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:30.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:30.795 --rc genhtml_branch_coverage=1
00:07:30.795 --rc genhtml_function_coverage=1
00:07:30.795 --rc genhtml_legend=1
00:07:30.795 --rc geninfo_all_blocks=1
00:07:30.795 --rc geninfo_unexecuted_blocks=1
00:07:30.795
00:07:30.795 '
00:07:30.795 11:15:52 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:07:30.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:30.795 --rc genhtml_branch_coverage=1
00:07:30.795 --rc genhtml_function_coverage=1
00:07:30.795 --rc genhtml_legend=1
00:07:30.795 --rc geninfo_all_blocks=1
00:07:30.795 --rc geninfo_unexecuted_blocks=1
00:07:30.795
00:07:30.795 '
00:07:30.795 11:15:52 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:07:30.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:30.795 --rc genhtml_branch_coverage=1
00:07:30.795 --rc genhtml_function_coverage=1
00:07:30.795 --rc genhtml_legend=1
00:07:30.795 --rc geninfo_all_blocks=1
00:07:30.795 --rc geninfo_unexecuted_blocks=1
00:07:30.795
00:07:30.795 '
00:07:30.795 11:15:52 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:30.795 11:15:52 json_config -- nvmf/common.sh@7 -- # uname -s
00:07:30.795 11:15:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:30.795 11:15:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:30.795 11:15:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:30.795 11:15:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:30.795 11:15:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:30.795 11:15:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:30.795 11:15:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:30.795 11:15:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:30.795 11:15:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:30.795 11:15:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7b0050-bbf1-48b8-acd4-61d22420e52c
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7b0050-bbf1-48b8-acd4-61d22420e52c
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:31.055 11:15:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:07:31.055 11:15:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:31.055 11:15:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:31.055 11:15:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:31.055 11:15:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:31.055 11:15:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:31.055 11:15:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:31.055 11:15:52 json_config -- paths/export.sh@5 -- # export PATH
00:07:31.055 11:15:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@51 -- # : 0
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:31.055 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:31.055 11:15:52 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:31.055 11:15:52 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:07:31.055 11:15:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:07:31.055 WARNING: No tests are enabled so not running JSON configuration tests
00:07:31.055 11:15:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:07:31.055 11:15:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:07:31.055 11:15:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:07:31.055 11:15:52 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:07:31.055 11:15:52 json_config -- json_config/json_config.sh@28 -- # exit 0
00:07:31.055 ************************************
00:07:31.055 END TEST json_config
00:07:31.055 ************************************
00:07:31.055
00:07:31.055 real 0m0.186s
00:07:31.055 user 0m0.118s
00:07:31.055 sys 0m0.069s
00:07:31.055 11:15:52 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:31.055 11:15:52 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:31.055 11:15:53 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:07:31.055 11:15:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:31.055 11:15:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:31.055 11:15:53 -- common/autotest_common.sh@10 -- # set +x
00:07:31.055 ************************************
00:07:31.055 START TEST json_config_extra_key
00:07:31.055 ************************************
00:07:31.055 11:15:53 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:07:31.055 11:15:53 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:31.055 11:15:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:31.055 11:15:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:07:31.055 11:15:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:31.055 11:15:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:31.055 11:15:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:31.055 11:15:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:31.055 11:15:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:07:31.055 11:15:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:07:31.055 11:15:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:07:31.055 11:15:53 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:07:31.056 11:15:53 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:31.056 11:15:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:31.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:31.056 --rc genhtml_branch_coverage=1
00:07:31.056 --rc genhtml_function_coverage=1
00:07:31.056 --rc genhtml_legend=1
00:07:31.056 --rc geninfo_all_blocks=1
00:07:31.056 --rc geninfo_unexecuted_blocks=1
00:07:31.056
00:07:31.056 '
00:07:31.056 11:15:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:31.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:31.056 --rc genhtml_branch_coverage=1
00:07:31.056 --rc genhtml_function_coverage=1
00:07:31.056 --rc genhtml_legend=1
00:07:31.056 --rc geninfo_all_blocks=1
00:07:31.056 --rc geninfo_unexecuted_blocks=1
00:07:31.056
00:07:31.056 '
00:07:31.056 11:15:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:07:31.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:31.056 --rc genhtml_branch_coverage=1
00:07:31.056 --rc genhtml_function_coverage=1
00:07:31.056 --rc genhtml_legend=1
00:07:31.056 --rc geninfo_all_blocks=1
00:07:31.056 --rc geninfo_unexecuted_blocks=1
00:07:31.056
00:07:31.056 '
00:07:31.056 11:15:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:07:31.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:31.056 --rc genhtml_branch_coverage=1
00:07:31.056 --rc genhtml_function_coverage=1
00:07:31.056 --rc genhtml_legend=1
00:07:31.056 --rc geninfo_all_blocks=1
00:07:31.056 --rc geninfo_unexecuted_blocks=1
00:07:31.056
00:07:31.056 '
00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7b0050-bbf1-48b8-acd4-61d22420e52c
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7b0050-bbf1-48b8-acd4-61d22420e52c
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:31.056 11:15:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:31.056 11:15:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:31.056 11:15:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:31.056 11:15:53 json_config_extra_key -- paths/export.sh@4
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.056 11:15:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:31.056 11:15:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.056 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.056 11:15:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:31.056 INFO: launching applications... 00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
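[editor's note] The "[: : integer expression expected" complaint above is bash refusing to compare an empty string numerically: the traced test at nvmf/common.sh line 33 expands to '[' '' -eq 1 ']' because the flag variable is unset. A defensive rewrite would default the value before comparing; this is our sketch, with a placeholder SOME_TEST_FLAG since the trace never shows the variable's actual name:

    # Hypothetical guard for the failing test at nvmf/common.sh line 33;
    # ${VAR:-0} substitutes 0 when the flag is unset or empty, so '[' always
    # sees an integer and the "integer expression expected" error disappears.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        :   # guarded branch body, not visible in the trace (it was skipped)
    fi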
00:07:31.056 11:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:31.056 11:15:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:31.056 11:15:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:31.056 Waiting for target to run... 00:07:31.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:31.056 11:15:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:31.056 11:15:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:31.056 11:15:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:31.056 11:15:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:31.056 11:15:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:31.056 11:15:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59034 00:07:31.056 11:15:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:31.056 11:15:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59034 /var/tmp/spdk_tgt.sock 00:07:31.056 11:15:53 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:31.056 11:15:53 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59034 ']' 00:07:31.057 11:15:53 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:31.057 11:15:53 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.057 11:15:53 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:31.057 11:15:53 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.057 11:15:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:31.315 [2024-12-10 11:15:53.330586] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:07:31.315 [2024-12-10 11:15:53.330785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59034 ] 00:07:31.574 [2024-12-10 11:15:53.697937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.832 [2024-12-10 11:15:53.824888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.399 00:07:32.399 INFO: shutting down applications... 00:07:32.399 11:15:54 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.399 11:15:54 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:32.399 11:15:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:32.399 11:15:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
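[editor's note] The launch sequence traced above reduces to: start spdk_tgt in the background with a JSON config and an RPC socket, then poll until the socket answers (waitforlisten ran with max_retries=100). A condensed sketch of that pattern, not the verbatim json_config/common.sh; the 0.1 s poll interval is our assumption, binary and config paths are taken from the log:

    # Start the target with the extra_key.json config, as in the trace.
    sock=/var/tmp/spdk_tgt.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r "$sock" --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!
    # Poll the RPC socket until the app answers, as waitforlisten does.
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
            &> /dev/null && break
        sleep 0.1                        # retry interval: our assumption
    done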
00:07:32.399 11:15:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:32.399 11:15:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:32.399 11:15:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:32.399 11:15:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59034 ]] 00:07:32.399 11:15:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59034 00:07:32.399 11:15:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:32.399 11:15:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:32.399 11:15:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59034 00:07:32.399 11:15:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:32.966 11:15:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:32.966 11:15:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:32.966 11:15:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59034 00:07:32.966 11:15:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:33.532 11:15:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:33.532 11:15:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:33.532 11:15:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59034 00:07:33.532 11:15:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:34.099 11:15:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:34.099 11:15:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:34.099 11:15:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59034 00:07:34.099 11:15:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:34.665 11:15:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:34.665 11:15:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:34.665 11:15:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59034 00:07:34.665 11:15:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:34.923 11:15:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:34.923 11:15:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:34.923 11:15:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59034 00:07:34.923 11:15:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:35.491 11:15:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:35.491 11:15:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:35.491 11:15:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59034 00:07:35.491 11:15:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:35.491 11:15:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:35.491 SPDK target shutdown done 00:07:35.491 Success 00:07:35.491 11:15:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:35.491 11:15:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:35.491 11:15:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:35.491 ************************************ 00:07:35.491 END TEST json_config_extra_key 00:07:35.491 
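[editor's note] The teardown that just ran is a SIGINT-then-poll loop, reconstructed here directly from the traced json_config/common.sh calls (kill -SIGINT, then up to 30 probes with kill -0 spaced 0.5 s apart, matching the sleep 0.5 iterations above):

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 sends no signal; it only probes whether the pid still exists.
        kill -0 "$app_pid" 2> /dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'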
************************************ 00:07:35.491 00:07:35.491 real 0m4.531s 00:07:35.491 user 0m4.044s 00:07:35.491 sys 0m0.478s 00:07:35.491 11:15:57 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.491 11:15:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:35.491 11:15:57 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:35.491 11:15:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.491 11:15:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.491 11:15:57 -- common/autotest_common.sh@10 -- # set +x 00:07:35.491 ************************************ 00:07:35.491 START TEST alias_rpc 00:07:35.491 ************************************ 00:07:35.491 11:15:57 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:35.750 * Looking for test storage... 00:07:35.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:35.750 11:15:57 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:35.750 11:15:57 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:35.750 11:15:57 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:35.750 11:15:57 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:35.750 11:15:57 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.750 11:15:57 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.750 11:15:57 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.750 11:15:57 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.750 11:15:57 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.750 11:15:57 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.750 11:15:57 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.751 11:15:57 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:35.751 11:15:57 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.751 11:15:57 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:35.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.751 --rc genhtml_branch_coverage=1 00:07:35.751 --rc genhtml_function_coverage=1 00:07:35.751 --rc genhtml_legend=1 00:07:35.751 --rc geninfo_all_blocks=1 00:07:35.751 --rc geninfo_unexecuted_blocks=1 00:07:35.751 00:07:35.751 ' 00:07:35.751 11:15:57 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:35.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.751 --rc genhtml_branch_coverage=1 00:07:35.751 --rc genhtml_function_coverage=1 00:07:35.751 --rc genhtml_legend=1 00:07:35.751 --rc geninfo_all_blocks=1 00:07:35.751 --rc geninfo_unexecuted_blocks=1 00:07:35.751 00:07:35.751 ' 00:07:35.751 11:15:57 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:35.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.751 --rc genhtml_branch_coverage=1 00:07:35.751 --rc genhtml_function_coverage=1 00:07:35.751 --rc genhtml_legend=1 00:07:35.751 --rc geninfo_all_blocks=1 00:07:35.751 --rc geninfo_unexecuted_blocks=1 00:07:35.751 00:07:35.751 ' 00:07:35.751 11:15:57 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:35.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.751 --rc genhtml_branch_coverage=1 00:07:35.751 --rc genhtml_function_coverage=1 00:07:35.751 --rc genhtml_legend=1 00:07:35.751 --rc geninfo_all_blocks=1 00:07:35.751 --rc geninfo_unexecuted_blocks=1 00:07:35.751 00:07:35.751 ' 00:07:35.751 11:15:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:35.751 11:15:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59150 00:07:35.751 11:15:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:35.751 11:15:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59150 00:07:35.751 11:15:57 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59150 ']' 00:07:35.751 11:15:57 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.751 11:15:57 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.751 11:15:57 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:35.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.751 11:15:57 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.751 11:15:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.751 [2024-12-10 11:15:57.913905] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:07:35.751 [2024-12-10 11:15:57.914751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59150 ] 00:07:36.010 [2024-12-10 11:15:58.095220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.268 [2024-12-10 11:15:58.220931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.203 11:15:59 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.203 11:15:59 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:37.203 11:15:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:37.461 11:15:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59150 00:07:37.461 11:15:59 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59150 ']' 00:07:37.461 11:15:59 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59150 00:07:37.461 11:15:59 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:37.461 11:15:59 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.461 11:15:59 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59150 00:07:37.461 killing process with pid 59150 00:07:37.461 11:15:59 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.461 11:15:59 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.461 11:15:59 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59150' 00:07:37.461 11:15:59 alias_rpc -- common/autotest_common.sh@973 -- # kill 59150 00:07:37.461 11:15:59 alias_rpc -- common/autotest_common.sh@978 -- # wait 59150 00:07:39.379 ************************************ 00:07:39.379 END TEST alias_rpc 00:07:39.379 ************************************ 00:07:39.379 00:07:39.379 real 0m3.942s 00:07:39.379 user 0m4.281s 00:07:39.379 sys 0m0.506s 00:07:39.379 11:16:01 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.379 11:16:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.638 11:16:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:39.638 11:16:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:39.638 11:16:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.638 11:16:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.638 11:16:01 -- common/autotest_common.sh@10 -- # set +x 00:07:39.638 ************************************ 00:07:39.638 START TEST spdkcli_tcp 00:07:39.638 ************************************ 00:07:39.638 11:16:01 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:39.638 * Looking for test storage... 
00:07:39.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:39.638 11:16:01 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:39.638 11:16:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:39.638 11:16:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:39.638 11:16:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.638 11:16:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:39.638 11:16:01 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.638 11:16:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.638 --rc genhtml_branch_coverage=1 00:07:39.638 --rc genhtml_function_coverage=1 00:07:39.638 --rc genhtml_legend=1 00:07:39.638 --rc geninfo_all_blocks=1 00:07:39.638 --rc geninfo_unexecuted_blocks=1 00:07:39.638 00:07:39.638 ' 00:07:39.638 11:16:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.638 --rc genhtml_branch_coverage=1 00:07:39.638 --rc genhtml_function_coverage=1 00:07:39.638 --rc genhtml_legend=1 00:07:39.638 --rc geninfo_all_blocks=1 00:07:39.638 --rc geninfo_unexecuted_blocks=1 00:07:39.638 
00:07:39.638 ' 00:07:39.638 11:16:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.638 --rc genhtml_branch_coverage=1 00:07:39.638 --rc genhtml_function_coverage=1 00:07:39.638 --rc genhtml_legend=1 00:07:39.638 --rc geninfo_all_blocks=1 00:07:39.638 --rc geninfo_unexecuted_blocks=1 00:07:39.638 00:07:39.638 ' 00:07:39.639 11:16:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:39.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.639 --rc genhtml_branch_coverage=1 00:07:39.639 --rc genhtml_function_coverage=1 00:07:39.639 --rc genhtml_legend=1 00:07:39.639 --rc geninfo_all_blocks=1 00:07:39.639 --rc geninfo_unexecuted_blocks=1 00:07:39.639 00:07:39.639 ' 00:07:39.639 11:16:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:39.639 11:16:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:39.639 11:16:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:39.639 11:16:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:39.639 11:16:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:39.639 11:16:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:39.639 11:16:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:39.639 11:16:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.639 11:16:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.639 11:16:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59252 00:07:39.639 11:16:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59252 00:07:39.639 11:16:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:39.639 11:16:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59252 ']' 00:07:39.639 11:16:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.639 11:16:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.639 11:16:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.639 11:16:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.639 11:16:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.897 [2024-12-10 11:16:01.895787] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:07:39.897 [2024-12-10 11:16:01.896794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59252 ] 00:07:40.155 [2024-12-10 11:16:02.083515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.155 [2024-12-10 11:16:02.222038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.155 [2024-12-10 11:16:02.222052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.090 11:16:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.090 11:16:03 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:41.090 11:16:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59276 00:07:41.090 11:16:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:41.090 11:16:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:41.350 [ 00:07:41.350 "bdev_malloc_delete", 00:07:41.350 "bdev_malloc_create", 00:07:41.350 "bdev_null_resize", 00:07:41.350 "bdev_null_delete", 00:07:41.350 "bdev_null_create", 00:07:41.350 "bdev_nvme_cuse_unregister", 00:07:41.350 "bdev_nvme_cuse_register", 00:07:41.350 "bdev_opal_new_user", 00:07:41.350 "bdev_opal_set_lock_state", 00:07:41.350 "bdev_opal_delete", 00:07:41.350 "bdev_opal_get_info", 00:07:41.350 "bdev_opal_create", 00:07:41.350 "bdev_nvme_opal_revert", 00:07:41.350 "bdev_nvme_opal_init", 00:07:41.350 "bdev_nvme_send_cmd", 00:07:41.350 "bdev_nvme_set_keys", 00:07:41.350 "bdev_nvme_get_path_iostat", 00:07:41.350 "bdev_nvme_get_mdns_discovery_info", 00:07:41.350 "bdev_nvme_stop_mdns_discovery", 00:07:41.350 "bdev_nvme_start_mdns_discovery", 00:07:41.350 "bdev_nvme_set_multipath_policy", 00:07:41.350 "bdev_nvme_set_preferred_path", 00:07:41.350 "bdev_nvme_get_io_paths", 00:07:41.350 "bdev_nvme_remove_error_injection", 00:07:41.350 "bdev_nvme_add_error_injection", 00:07:41.350 "bdev_nvme_get_discovery_info", 00:07:41.350 "bdev_nvme_stop_discovery", 00:07:41.350 "bdev_nvme_start_discovery", 00:07:41.350 "bdev_nvme_get_controller_health_info", 00:07:41.350 "bdev_nvme_disable_controller", 00:07:41.350 "bdev_nvme_enable_controller", 00:07:41.350 "bdev_nvme_reset_controller", 00:07:41.350 "bdev_nvme_get_transport_statistics", 00:07:41.350 "bdev_nvme_apply_firmware", 00:07:41.350 "bdev_nvme_detach_controller", 00:07:41.350 "bdev_nvme_get_controllers", 00:07:41.350 "bdev_nvme_attach_controller", 00:07:41.350 "bdev_nvme_set_hotplug", 00:07:41.350 "bdev_nvme_set_options", 00:07:41.350 "bdev_passthru_delete", 00:07:41.350 "bdev_passthru_create", 00:07:41.350 "bdev_lvol_set_parent_bdev", 00:07:41.350 "bdev_lvol_set_parent", 00:07:41.350 "bdev_lvol_check_shallow_copy", 00:07:41.350 "bdev_lvol_start_shallow_copy", 00:07:41.350 "bdev_lvol_grow_lvstore", 00:07:41.350 "bdev_lvol_get_lvols", 00:07:41.350 "bdev_lvol_get_lvstores", 00:07:41.350 "bdev_lvol_delete", 00:07:41.350 "bdev_lvol_set_read_only", 00:07:41.350 "bdev_lvol_resize", 00:07:41.350 "bdev_lvol_decouple_parent", 00:07:41.350 "bdev_lvol_inflate", 00:07:41.350 "bdev_lvol_rename", 00:07:41.350 "bdev_lvol_clone_bdev", 00:07:41.350 "bdev_lvol_clone", 00:07:41.351 "bdev_lvol_snapshot", 00:07:41.351 "bdev_lvol_create", 00:07:41.351 "bdev_lvol_delete_lvstore", 00:07:41.351 "bdev_lvol_rename_lvstore", 00:07:41.351 
"bdev_lvol_create_lvstore", 00:07:41.351 "bdev_raid_set_options", 00:07:41.351 "bdev_raid_remove_base_bdev", 00:07:41.351 "bdev_raid_add_base_bdev", 00:07:41.351 "bdev_raid_delete", 00:07:41.351 "bdev_raid_create", 00:07:41.351 "bdev_raid_get_bdevs", 00:07:41.351 "bdev_error_inject_error", 00:07:41.351 "bdev_error_delete", 00:07:41.351 "bdev_error_create", 00:07:41.351 "bdev_split_delete", 00:07:41.351 "bdev_split_create", 00:07:41.351 "bdev_delay_delete", 00:07:41.351 "bdev_delay_create", 00:07:41.351 "bdev_delay_update_latency", 00:07:41.351 "bdev_zone_block_delete", 00:07:41.351 "bdev_zone_block_create", 00:07:41.351 "blobfs_create", 00:07:41.351 "blobfs_detect", 00:07:41.351 "blobfs_set_cache_size", 00:07:41.351 "bdev_xnvme_delete", 00:07:41.351 "bdev_xnvme_create", 00:07:41.351 "bdev_aio_delete", 00:07:41.351 "bdev_aio_rescan", 00:07:41.351 "bdev_aio_create", 00:07:41.351 "bdev_ftl_set_property", 00:07:41.351 "bdev_ftl_get_properties", 00:07:41.351 "bdev_ftl_get_stats", 00:07:41.351 "bdev_ftl_unmap", 00:07:41.351 "bdev_ftl_unload", 00:07:41.351 "bdev_ftl_delete", 00:07:41.351 "bdev_ftl_load", 00:07:41.351 "bdev_ftl_create", 00:07:41.351 "bdev_virtio_attach_controller", 00:07:41.351 "bdev_virtio_scsi_get_devices", 00:07:41.351 "bdev_virtio_detach_controller", 00:07:41.351 "bdev_virtio_blk_set_hotplug", 00:07:41.351 "bdev_iscsi_delete", 00:07:41.351 "bdev_iscsi_create", 00:07:41.351 "bdev_iscsi_set_options", 00:07:41.351 "accel_error_inject_error", 00:07:41.351 "ioat_scan_accel_module", 00:07:41.351 "dsa_scan_accel_module", 00:07:41.351 "iaa_scan_accel_module", 00:07:41.351 "keyring_file_remove_key", 00:07:41.351 "keyring_file_add_key", 00:07:41.351 "keyring_linux_set_options", 00:07:41.351 "fsdev_aio_delete", 00:07:41.351 "fsdev_aio_create", 00:07:41.351 "iscsi_get_histogram", 00:07:41.351 "iscsi_enable_histogram", 00:07:41.351 "iscsi_set_options", 00:07:41.351 "iscsi_get_auth_groups", 00:07:41.351 "iscsi_auth_group_remove_secret", 00:07:41.351 "iscsi_auth_group_add_secret", 00:07:41.351 "iscsi_delete_auth_group", 00:07:41.351 "iscsi_create_auth_group", 00:07:41.351 "iscsi_set_discovery_auth", 00:07:41.351 "iscsi_get_options", 00:07:41.351 "iscsi_target_node_request_logout", 00:07:41.351 "iscsi_target_node_set_redirect", 00:07:41.351 "iscsi_target_node_set_auth", 00:07:41.351 "iscsi_target_node_add_lun", 00:07:41.351 "iscsi_get_stats", 00:07:41.351 "iscsi_get_connections", 00:07:41.351 "iscsi_portal_group_set_auth", 00:07:41.351 "iscsi_start_portal_group", 00:07:41.351 "iscsi_delete_portal_group", 00:07:41.351 "iscsi_create_portal_group", 00:07:41.351 "iscsi_get_portal_groups", 00:07:41.351 "iscsi_delete_target_node", 00:07:41.351 "iscsi_target_node_remove_pg_ig_maps", 00:07:41.351 "iscsi_target_node_add_pg_ig_maps", 00:07:41.351 "iscsi_create_target_node", 00:07:41.351 "iscsi_get_target_nodes", 00:07:41.351 "iscsi_delete_initiator_group", 00:07:41.351 "iscsi_initiator_group_remove_initiators", 00:07:41.351 "iscsi_initiator_group_add_initiators", 00:07:41.351 "iscsi_create_initiator_group", 00:07:41.351 "iscsi_get_initiator_groups", 00:07:41.351 "nvmf_set_crdt", 00:07:41.351 "nvmf_set_config", 00:07:41.351 "nvmf_set_max_subsystems", 00:07:41.351 "nvmf_stop_mdns_prr", 00:07:41.351 "nvmf_publish_mdns_prr", 00:07:41.351 "nvmf_subsystem_get_listeners", 00:07:41.351 "nvmf_subsystem_get_qpairs", 00:07:41.351 "nvmf_subsystem_get_controllers", 00:07:41.351 "nvmf_get_stats", 00:07:41.351 "nvmf_get_transports", 00:07:41.351 "nvmf_create_transport", 00:07:41.351 "nvmf_get_targets", 00:07:41.351 
"nvmf_delete_target", 00:07:41.351 "nvmf_create_target", 00:07:41.351 "nvmf_subsystem_allow_any_host", 00:07:41.351 "nvmf_subsystem_set_keys", 00:07:41.351 "nvmf_subsystem_remove_host", 00:07:41.351 "nvmf_subsystem_add_host", 00:07:41.351 "nvmf_ns_remove_host", 00:07:41.351 "nvmf_ns_add_host", 00:07:41.351 "nvmf_subsystem_remove_ns", 00:07:41.351 "nvmf_subsystem_set_ns_ana_group", 00:07:41.351 "nvmf_subsystem_add_ns", 00:07:41.351 "nvmf_subsystem_listener_set_ana_state", 00:07:41.351 "nvmf_discovery_get_referrals", 00:07:41.351 "nvmf_discovery_remove_referral", 00:07:41.351 "nvmf_discovery_add_referral", 00:07:41.351 "nvmf_subsystem_remove_listener", 00:07:41.351 "nvmf_subsystem_add_listener", 00:07:41.351 "nvmf_delete_subsystem", 00:07:41.351 "nvmf_create_subsystem", 00:07:41.351 "nvmf_get_subsystems", 00:07:41.351 "env_dpdk_get_mem_stats", 00:07:41.351 "nbd_get_disks", 00:07:41.351 "nbd_stop_disk", 00:07:41.351 "nbd_start_disk", 00:07:41.351 "ublk_recover_disk", 00:07:41.351 "ublk_get_disks", 00:07:41.351 "ublk_stop_disk", 00:07:41.351 "ublk_start_disk", 00:07:41.351 "ublk_destroy_target", 00:07:41.351 "ublk_create_target", 00:07:41.351 "virtio_blk_create_transport", 00:07:41.351 "virtio_blk_get_transports", 00:07:41.351 "vhost_controller_set_coalescing", 00:07:41.351 "vhost_get_controllers", 00:07:41.351 "vhost_delete_controller", 00:07:41.351 "vhost_create_blk_controller", 00:07:41.351 "vhost_scsi_controller_remove_target", 00:07:41.351 "vhost_scsi_controller_add_target", 00:07:41.351 "vhost_start_scsi_controller", 00:07:41.351 "vhost_create_scsi_controller", 00:07:41.351 "thread_set_cpumask", 00:07:41.351 "scheduler_set_options", 00:07:41.351 "framework_get_governor", 00:07:41.351 "framework_get_scheduler", 00:07:41.351 "framework_set_scheduler", 00:07:41.351 "framework_get_reactors", 00:07:41.351 "thread_get_io_channels", 00:07:41.351 "thread_get_pollers", 00:07:41.351 "thread_get_stats", 00:07:41.351 "framework_monitor_context_switch", 00:07:41.351 "spdk_kill_instance", 00:07:41.351 "log_enable_timestamps", 00:07:41.351 "log_get_flags", 00:07:41.351 "log_clear_flag", 00:07:41.351 "log_set_flag", 00:07:41.351 "log_get_level", 00:07:41.351 "log_set_level", 00:07:41.351 "log_get_print_level", 00:07:41.351 "log_set_print_level", 00:07:41.351 "framework_enable_cpumask_locks", 00:07:41.351 "framework_disable_cpumask_locks", 00:07:41.351 "framework_wait_init", 00:07:41.351 "framework_start_init", 00:07:41.351 "scsi_get_devices", 00:07:41.351 "bdev_get_histogram", 00:07:41.351 "bdev_enable_histogram", 00:07:41.351 "bdev_set_qos_limit", 00:07:41.351 "bdev_set_qd_sampling_period", 00:07:41.351 "bdev_get_bdevs", 00:07:41.351 "bdev_reset_iostat", 00:07:41.351 "bdev_get_iostat", 00:07:41.351 "bdev_examine", 00:07:41.351 "bdev_wait_for_examine", 00:07:41.351 "bdev_set_options", 00:07:41.351 "accel_get_stats", 00:07:41.351 "accel_set_options", 00:07:41.351 "accel_set_driver", 00:07:41.351 "accel_crypto_key_destroy", 00:07:41.351 "accel_crypto_keys_get", 00:07:41.351 "accel_crypto_key_create", 00:07:41.351 "accel_assign_opc", 00:07:41.351 "accel_get_module_info", 00:07:41.351 "accel_get_opc_assignments", 00:07:41.351 "vmd_rescan", 00:07:41.351 "vmd_remove_device", 00:07:41.351 "vmd_enable", 00:07:41.351 "sock_get_default_impl", 00:07:41.351 "sock_set_default_impl", 00:07:41.351 "sock_impl_set_options", 00:07:41.351 "sock_impl_get_options", 00:07:41.351 "iobuf_get_stats", 00:07:41.351 "iobuf_set_options", 00:07:41.351 "keyring_get_keys", 00:07:41.351 "framework_get_pci_devices", 00:07:41.351 
"framework_get_config", 00:07:41.351 "framework_get_subsystems", 00:07:41.351 "fsdev_set_opts", 00:07:41.351 "fsdev_get_opts", 00:07:41.351 "trace_get_info", 00:07:41.351 "trace_get_tpoint_group_mask", 00:07:41.351 "trace_disable_tpoint_group", 00:07:41.351 "trace_enable_tpoint_group", 00:07:41.351 "trace_clear_tpoint_mask", 00:07:41.351 "trace_set_tpoint_mask", 00:07:41.351 "notify_get_notifications", 00:07:41.351 "notify_get_types", 00:07:41.351 "spdk_get_version", 00:07:41.351 "rpc_get_methods" 00:07:41.351 ] 00:07:41.351 11:16:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:41.351 11:16:03 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.351 11:16:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.351 11:16:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:41.351 11:16:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59252 00:07:41.351 11:16:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59252 ']' 00:07:41.351 11:16:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59252 00:07:41.351 11:16:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:41.351 11:16:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.351 11:16:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59252 00:07:41.351 killing process with pid 59252 00:07:41.351 11:16:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.351 11:16:03 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.351 11:16:03 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59252' 00:07:41.351 11:16:03 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59252 00:07:41.351 11:16:03 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59252 00:07:43.910 ************************************ 00:07:43.910 END TEST spdkcli_tcp 00:07:43.910 ************************************ 00:07:43.910 00:07:43.910 real 0m4.050s 00:07:43.910 user 0m7.488s 00:07:43.910 sys 0m0.508s 00:07:43.910 11:16:05 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.910 11:16:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.910 11:16:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:43.910 11:16:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.910 11:16:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.910 11:16:05 -- common/autotest_common.sh@10 -- # set +x 00:07:43.910 ************************************ 00:07:43.910 START TEST dpdk_mem_utility 00:07:43.910 ************************************ 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:43.910 * Looking for test storage... 
00:07:43.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.910 11:16:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:43.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.910 --rc genhtml_branch_coverage=1 00:07:43.910 --rc genhtml_function_coverage=1 00:07:43.910 --rc genhtml_legend=1 00:07:43.910 --rc geninfo_all_blocks=1 00:07:43.910 --rc geninfo_unexecuted_blocks=1 00:07:43.910 00:07:43.910 ' 00:07:43.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
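[editor's note] Each TEST block in this log re-runs the same lcov probe seen above: 'lt 1.15 2' splits both version strings on dots and compares them field by field, and only when the installed lcov is older than 2.x are the --rc lcov_* options exported. A simplified reconstruction of that comparison (the real cmp_versions also splits on '-' and ':', which we omit here):

    # Return 0 (true) when dotted version $1 is strictly less than $2.
    lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            # Missing fields count as 0, so "2" compares like "2.0".
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1
    }
    # Old lcov (pre-2.x) needs the --rc spelling of the coverage options.
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi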
00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:43.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.910 --rc genhtml_branch_coverage=1 00:07:43.910 --rc genhtml_function_coverage=1 00:07:43.910 --rc genhtml_legend=1 00:07:43.910 --rc geninfo_all_blocks=1 00:07:43.910 --rc geninfo_unexecuted_blocks=1 00:07:43.910 00:07:43.910 ' 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:43.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.910 --rc genhtml_branch_coverage=1 00:07:43.910 --rc genhtml_function_coverage=1 00:07:43.910 --rc genhtml_legend=1 00:07:43.910 --rc geninfo_all_blocks=1 00:07:43.910 --rc geninfo_unexecuted_blocks=1 00:07:43.910 00:07:43.910 ' 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:43.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.910 --rc genhtml_branch_coverage=1 00:07:43.910 --rc genhtml_function_coverage=1 00:07:43.910 --rc genhtml_legend=1 00:07:43.910 --rc geninfo_all_blocks=1 00:07:43.910 --rc geninfo_unexecuted_blocks=1 00:07:43.910 00:07:43.910 ' 00:07:43.910 11:16:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:43.910 11:16:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59377 00:07:43.910 11:16:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59377 00:07:43.910 11:16:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59377 ']' 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.910 11:16:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:43.910 [2024-12-10 11:16:06.002022] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:07:43.910 [2024-12-10 11:16:06.002356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59377 ] 00:07:44.168 [2024-12-10 11:16:06.186122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.168 [2024-12-10 11:16:06.309767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.106 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.106 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:45.106 11:16:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:45.106 11:16:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:45.106 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.106 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:45.106 { 00:07:45.106 "filename": "/tmp/spdk_mem_dump.txt" 00:07:45.106 } 00:07:45.106 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.106 11:16:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:45.106 DPDK memory size 824.000000 MiB in 1 heap(s) 00:07:45.106 1 heaps totaling size 824.000000 MiB 00:07:45.106 size: 824.000000 MiB heap id: 0 00:07:45.106 end heaps---------- 00:07:45.106 9 mempools totaling size 603.782043 MiB 00:07:45.106 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:45.106 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:45.106 size: 100.555481 MiB name: bdev_io_59377 00:07:45.106 size: 50.003479 MiB name: msgpool_59377 00:07:45.106 size: 36.509338 MiB name: fsdev_io_59377 00:07:45.106 size: 21.763794 MiB name: PDU_Pool 00:07:45.106 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:45.106 size: 4.133484 MiB name: evtpool_59377 00:07:45.106 size: 0.026123 MiB name: Session_Pool 00:07:45.106 end mempools------- 00:07:45.106 6 memzones totaling size 4.142822 MiB 00:07:45.106 size: 1.000366 MiB name: RG_ring_0_59377 00:07:45.106 size: 1.000366 MiB name: RG_ring_1_59377 00:07:45.106 size: 1.000366 MiB name: RG_ring_4_59377 00:07:45.106 size: 1.000366 MiB name: RG_ring_5_59377 00:07:45.106 size: 0.125366 MiB name: RG_ring_2_59377 00:07:45.106 size: 0.015991 MiB name: RG_ring_3_59377 00:07:45.106 end memzones------- 00:07:45.106 11:16:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:45.106 heap id: 0 total size: 824.000000 MiB number of busy elements: 313 number of free elements: 18 00:07:45.106 list of free elements. 
size: 16.781860 MiB 00:07:45.106 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:45.106 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:45.106 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:45.106 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:45.106 element at address: 0x200019900040 with size: 0.999939 MiB 00:07:45.106 element at address: 0x200019a00000 with size: 0.999084 MiB 00:07:45.106 element at address: 0x200032600000 with size: 0.994324 MiB 00:07:45.106 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:45.106 element at address: 0x200019200000 with size: 0.959656 MiB 00:07:45.106 element at address: 0x200019d00040 with size: 0.936401 MiB 00:07:45.106 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:45.106 element at address: 0x20001b400000 with size: 0.563171 MiB 00:07:45.106 element at address: 0x200000c00000 with size: 0.489197 MiB 00:07:45.106 element at address: 0x200019600000 with size: 0.487976 MiB 00:07:45.106 element at address: 0x200019e00000 with size: 0.485413 MiB 00:07:45.106 element at address: 0x200012c00000 with size: 0.433472 MiB 00:07:45.106 element at address: 0x200028800000 with size: 0.390442 MiB 00:07:45.106 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:45.106 list of standard malloc elements. size: 199.287231 MiB 00:07:45.106 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:45.106 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:45.106 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:45.106 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:45.106 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:07:45.106 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:45.106 element at address: 0x200019deff40 with size: 0.062683 MiB 00:07:45.106 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:45.106 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:45.106 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:07:45.106 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:45.106 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:07:45.106 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:07:45.106 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:07:45.106 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:07:45.106 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:07:45.106 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:07:45.106 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:07:45.106 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:07:45.106 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:07:45.106 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:07:45.106 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:07:45.106 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:07:45.107 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:07:45.107 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200000cff000 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bff180 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bff280 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bff380 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bff480 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bff580 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bff680 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bff780 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bff880 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bff980 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:07:45.107 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200019affc40 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b491fc0 with size: 0.000244 MiB 
00:07:45.107 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:07:45.107 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:07:45.108 element at 
address: 0x20001b4951c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:07:45.108 element at address: 0x200028863f40 with size: 0.000244 MiB 00:07:45.108 element at address: 0x200028864040 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886af80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886b080 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886b180 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886b280 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886b380 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886b480 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886b580 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886b680 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886b780 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886b880 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886b980 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886be80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886c080 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886c180 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886c280 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886c380 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886c480 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886c580 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886c680 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886c780 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886c880 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886c980 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886d080 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886d180 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886d280 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886d380 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886d480 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886d580 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886d680 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886d780 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886d880 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886d980 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886da80 
with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886db80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886de80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886df80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886e080 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886e180 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886e280 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886e380 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886e480 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886e580 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886e680 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886e780 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886e880 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886e980 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886f080 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886f180 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886f280 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886f380 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886f480 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886f580 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886f680 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886f780 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886f880 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886f980 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:07:45.108 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:07:45.108 list of memzone associated elements. 
size: 607.930908 MiB 00:07:45.108 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:07:45.108 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:45.108 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:07:45.108 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:45.108 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:07:45.108 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59377_0 00:07:45.108 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:45.108 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59377_0 00:07:45.108 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:45.108 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59377_0 00:07:45.108 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:07:45.108 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:45.108 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:07:45.108 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:45.109 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:45.109 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59377_0 00:07:45.109 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:45.109 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59377 00:07:45.109 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:45.109 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59377 00:07:45.109 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:07:45.109 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:45.109 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:07:45.109 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:45.109 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:45.109 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:45.109 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:07:45.109 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:45.109 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:45.109 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59377 00:07:45.109 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:45.109 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59377 00:07:45.109 element at address: 0x200019affd40 with size: 1.000549 MiB 00:07:45.109 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59377 00:07:45.109 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:07:45.109 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59377 00:07:45.109 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:45.109 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59377 00:07:45.109 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:45.109 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59377 00:07:45.109 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:07:45.109 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:45.109 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:07:45.109 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:45.109 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:07:45.109 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool
00:07:45.109 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:07:45.109 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59377
00:07:45.109 element at address: 0x20000085df80 with size: 0.125549 MiB
00:07:45.109 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59377
00:07:45.109 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:07:45.109 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:45.109 element at address: 0x200028864140 with size: 0.023804 MiB
00:07:45.109 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:07:45.109 element at address: 0x200000859d40 with size: 0.016174 MiB
00:07:45.109 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59377
00:07:45.109 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:07:45.109 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:07:45.109 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:07:45.109 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59377
00:07:45.109 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:07:45.109 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59377
00:07:45.109 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:07:45.109 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59377
00:07:45.109 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:07:45.109 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:07:45.367 11:16:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:45.367 11:16:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59377
00:07:45.367 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59377 ']'
00:07:45.367 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59377
00:07:45.367 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:07:45.367 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:45.367 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59377
00:07:45.367 killing process with pid 59377
11:16:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:45.367 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:45.367 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59377'
00:07:45.367 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59377
00:07:45.367 11:16:07 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59377
00:07:47.896
00:07:47.896 real 0m3.903s
00:07:47.896 user 0m4.075s
00:07:47.896 sys 0m0.519s
00:07:47.896 11:16:09 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:47.896 ************************************
00:07:47.896 END TEST dpdk_mem_utility
00:07:47.896 ************************************
00:07:47.896 11:16:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:47.896 11:16:09 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:47.896 11:16:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:47.896 11:16:09 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:47.896 11:16:09 -- common/autotest_common.sh@10 -- # set +x
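For reference, the dpdk_mem_utility flow traced above can be repeated by hand against a running SPDK target; a minimal sketch using the same repo scripts (default RPC socket assumed):

    # Ask the running app to dump DPDK memory stats (written to /tmp/spdk_mem_dump.txt, as reported above)
    scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize the dump: heap totals, mempools, memzones
    scripts/dpdk_mem_info.py
    # Expand heap 0 into the per-element free/busy lists shown above
    scripts/dpdk_mem_info.py -m 0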
00:07:47.896 ************************************ 00:07:47.896 START TEST event 00:07:47.896 ************************************ 00:07:47.896 11:16:09 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:47.896 * Looking for test storage... 00:07:47.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:47.896 11:16:09 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:47.896 11:16:09 event -- common/autotest_common.sh@1711 -- # lcov --version 00:07:47.896 11:16:09 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:47.896 11:16:09 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:47.896 11:16:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.896 11:16:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.896 11:16:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.896 11:16:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.896 11:16:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.896 11:16:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.896 11:16:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.896 11:16:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.896 11:16:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.896 11:16:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.896 11:16:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.896 11:16:09 event -- scripts/common.sh@344 -- # case "$op" in 00:07:47.896 11:16:09 event -- scripts/common.sh@345 -- # : 1 00:07:47.896 11:16:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.897 11:16:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.897 11:16:09 event -- scripts/common.sh@365 -- # decimal 1 00:07:47.897 11:16:09 event -- scripts/common.sh@353 -- # local d=1 00:07:47.897 11:16:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.897 11:16:09 event -- scripts/common.sh@355 -- # echo 1 00:07:47.897 11:16:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.897 11:16:09 event -- scripts/common.sh@366 -- # decimal 2 00:07:47.897 11:16:09 event -- scripts/common.sh@353 -- # local d=2 00:07:47.897 11:16:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.897 11:16:09 event -- scripts/common.sh@355 -- # echo 2 00:07:47.897 11:16:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.897 11:16:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.897 11:16:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.897 11:16:09 event -- scripts/common.sh@368 -- # return 0 00:07:47.897 11:16:09 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.897 11:16:09 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:47.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.897 --rc genhtml_branch_coverage=1 00:07:47.897 --rc genhtml_function_coverage=1 00:07:47.897 --rc genhtml_legend=1 00:07:47.897 --rc geninfo_all_blocks=1 00:07:47.897 --rc geninfo_unexecuted_blocks=1 00:07:47.897 00:07:47.897 ' 00:07:47.897 11:16:09 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:47.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.897 --rc genhtml_branch_coverage=1 00:07:47.897 --rc genhtml_function_coverage=1 00:07:47.897 --rc genhtml_legend=1 00:07:47.897 --rc 
geninfo_all_blocks=1 00:07:47.897 --rc geninfo_unexecuted_blocks=1 00:07:47.897 00:07:47.897 ' 00:07:47.897 11:16:09 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:47.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.897 --rc genhtml_branch_coverage=1 00:07:47.897 --rc genhtml_function_coverage=1 00:07:47.897 --rc genhtml_legend=1 00:07:47.897 --rc geninfo_all_blocks=1 00:07:47.897 --rc geninfo_unexecuted_blocks=1 00:07:47.897 00:07:47.897 ' 00:07:47.897 11:16:09 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:47.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.897 --rc genhtml_branch_coverage=1 00:07:47.897 --rc genhtml_function_coverage=1 00:07:47.897 --rc genhtml_legend=1 00:07:47.897 --rc geninfo_all_blocks=1 00:07:47.897 --rc geninfo_unexecuted_blocks=1 00:07:47.897 00:07:47.897 ' 00:07:47.897 11:16:09 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:47.897 11:16:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:47.897 11:16:09 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:47.897 11:16:09 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:47.897 11:16:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.897 11:16:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:47.897 ************************************ 00:07:47.897 START TEST event_perf 00:07:47.897 ************************************ 00:07:47.897 11:16:09 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:47.897 Running I/O for 1 seconds...[2024-12-10 11:16:09.861572] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:07:47.897 [2024-12-10 11:16:09.862724] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59485 ] 00:07:47.897 [2024-12-10 11:16:10.056917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.155 [2024-12-10 11:16:10.174134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.155 [2024-12-10 11:16:10.174561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.155 Running I/O for 1 seconds...[2024-12-10 11:16:10.174660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.155 [2024-12-10 11:16:10.174660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.530 00:07:49.531 lcore 0: 162742 00:07:49.531 lcore 1: 162741 00:07:49.531 lcore 2: 162742 00:07:49.531 lcore 3: 162741 00:07:49.531 done. 
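The per-lcore event counts above are the raw output of the event_perf benchmark; the same run can be launched directly (a sketch, paths relative to the spdk repo used in this job):

    # 1-second event-processing benchmark across 4 reactors (core mask 0xF)
    test/event/event_perf/event_perf -m 0xF -t 1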
00:07:49.531 00:07:49.531 real 0m1.606s 00:07:49.531 user 0m4.338s 00:07:49.531 sys 0m0.122s 00:07:49.531 11:16:11 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.531 11:16:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:49.531 ************************************ 00:07:49.531 END TEST event_perf 00:07:49.531 ************************************ 00:07:49.531 11:16:11 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:49.531 11:16:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:49.531 11:16:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.531 11:16:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:49.531 ************************************ 00:07:49.531 START TEST event_reactor 00:07:49.531 ************************************ 00:07:49.531 11:16:11 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:49.531 [2024-12-10 11:16:11.503403] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:07:49.531 [2024-12-10 11:16:11.503747] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59524 ] 00:07:49.531 [2024-12-10 11:16:11.675152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.789 [2024-12-10 11:16:11.778424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.167 test_start 00:07:51.167 oneshot 00:07:51.167 tick 100 00:07:51.167 tick 100 00:07:51.167 tick 250 00:07:51.167 tick 100 00:07:51.167 tick 100 00:07:51.167 tick 100 00:07:51.167 tick 250 00:07:51.167 tick 500 00:07:51.167 tick 100 00:07:51.167 tick 100 00:07:51.167 tick 250 00:07:51.167 tick 100 00:07:51.167 tick 100 00:07:51.167 test_end 00:07:51.167 ************************************ 00:07:51.167 END TEST event_reactor 00:07:51.167 ************************************ 00:07:51.167 00:07:51.167 real 0m1.534s 00:07:51.167 user 0m1.348s 00:07:51.167 sys 0m0.078s 00:07:51.167 11:16:12 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.167 11:16:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:51.167 11:16:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:51.167 11:16:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:51.167 11:16:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.167 11:16:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:51.167 ************************************ 00:07:51.167 START TEST event_reactor_perf 00:07:51.167 ************************************ 00:07:51.167 11:16:13 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:51.167 [2024-12-10 11:16:13.078898] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
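The reactor tick trace above and the reactor_perf run starting here use the same single-core invocation pattern (a sketch, same relative paths):

    # 1-second reactor timer test; emits the test_start / tick ... / test_end trace seen above
    test/event/reactor/reactor -t 1
    # 1-second reactor throughput test; reports events per second
    test/event/reactor_perf/reactor_perf -t 1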
00:07:51.167 [2024-12-10 11:16:13.079041] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59561 ] 00:07:51.167 [2024-12-10 11:16:13.249559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.427 [2024-12-10 11:16:13.352557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.806 test_start 00:07:52.806 test_end 00:07:52.806 Performance: 268633 events per second 00:07:52.806 ************************************ 00:07:52.806 END TEST event_reactor_perf 00:07:52.806 ************************************ 00:07:52.806 00:07:52.806 real 0m1.527s 00:07:52.806 user 0m1.347s 00:07:52.806 sys 0m0.071s 00:07:52.806 11:16:14 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.806 11:16:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:52.806 11:16:14 event -- event/event.sh@49 -- # uname -s 00:07:52.806 11:16:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:52.806 11:16:14 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:52.806 11:16:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.806 11:16:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.806 11:16:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:52.806 ************************************ 00:07:52.806 START TEST event_scheduler 00:07:52.806 ************************************ 00:07:52.806 11:16:14 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:52.806 * Looking for test storage... 
00:07:52.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:52.806 11:16:14 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:52.806 11:16:14 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:52.806 11:16:14 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:52.806 11:16:14 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.806 11:16:14 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:52.806 11:16:14 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.806 11:16:14 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:52.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.806 --rc genhtml_branch_coverage=1 00:07:52.806 --rc genhtml_function_coverage=1 00:07:52.806 --rc genhtml_legend=1 00:07:52.806 --rc geninfo_all_blocks=1 00:07:52.806 --rc geninfo_unexecuted_blocks=1 00:07:52.806 00:07:52.806 ' 00:07:52.806 11:16:14 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:52.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.806 --rc genhtml_branch_coverage=1 00:07:52.806 --rc genhtml_function_coverage=1 00:07:52.806 --rc genhtml_legend=1 00:07:52.806 --rc geninfo_all_blocks=1 00:07:52.806 --rc geninfo_unexecuted_blocks=1 00:07:52.806 00:07:52.806 ' 00:07:52.806 11:16:14 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:52.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.806 --rc genhtml_branch_coverage=1 00:07:52.806 --rc genhtml_function_coverage=1 00:07:52.806 --rc genhtml_legend=1 00:07:52.806 --rc geninfo_all_blocks=1 00:07:52.806 --rc geninfo_unexecuted_blocks=1 00:07:52.806 00:07:52.806 ' 00:07:52.806 11:16:14 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:52.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.806 --rc genhtml_branch_coverage=1 00:07:52.806 --rc genhtml_function_coverage=1 00:07:52.806 --rc genhtml_legend=1 00:07:52.806 --rc geninfo_all_blocks=1 00:07:52.806 --rc geninfo_unexecuted_blocks=1 00:07:52.806 00:07:52.806 ' 00:07:52.806 11:16:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:52.806 11:16:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59639 00:07:52.806 11:16:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:52.806 11:16:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:52.806 11:16:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59639 00:07:52.807 11:16:14 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59639 ']' 00:07:52.807 11:16:14 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.807 11:16:14 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.807 11:16:14 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.807 11:16:14 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.807 11:16:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:52.807 [2024-12-10 11:16:14.928061] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:07:52.807 [2024-12-10 11:16:14.928773] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59639 ] 00:07:53.066 [2024-12-10 11:16:15.110191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.066 [2024-12-10 11:16:15.218714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.066 [2024-12-10 11:16:15.218830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.066 [2024-12-10 11:16:15.218899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.066 [2024-12-10 11:16:15.219469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.999 11:16:15 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.999 11:16:15 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:53.999 11:16:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:53.999 11:16:15 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.999 11:16:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:53.999 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:53.999 POWER: Cannot set governor of lcore 0 to userspace 00:07:53.999 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:53.999 POWER: Cannot set governor of lcore 0 to performance 00:07:53.999 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:53.999 POWER: Cannot set governor of lcore 0 to userspace 00:07:53.999 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:53.999 POWER: Cannot set governor of lcore 0 to userspace 00:07:53.999 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:53.999 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:53.999 POWER: Unable to set Power Management Environment for lcore 0 00:07:53.999 [2024-12-10 11:16:15.985359] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:53.999 [2024-12-10 11:16:15.985406] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:53.999 [2024-12-10 11:16:15.985430] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:53.999 [2024-12-10 11:16:15.985466] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:53.999 [2024-12-10 11:16:15.985488] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:53.999 [2024-12-10 11:16:15.985510] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:53.999 11:16:15 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.999 11:16:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:53.999 11:16:15 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.999 11:16:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:54.257 [2024-12-10 11:16:16.279850] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:54.257 11:16:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.257 11:16:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:54.257 11:16:16 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.257 11:16:16 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.257 11:16:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:54.257 ************************************ 00:07:54.257 START TEST scheduler_create_thread 00:07:54.257 ************************************ 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.257 2 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.257 3 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.257 4 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.257 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:54.258 5
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:54.258 6
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:54.258 7
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:54.258 8
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:54.258 9
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:54.258 10
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.258 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:55.629 ************************************
00:07:55.629 END TEST scheduler_create_thread
00:07:55.629 ************************************
00:07:55.629 11:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:55.629
00:07:55.629 real 0m1.173s
00:07:55.629 user 0m0.013s
00:07:55.629 sys 0m0.005s
00:07:55.629 11:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:55.629 11:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:55.629 11:16:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:07:55.629 11:16:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59639
00:07:55.629 11:16:17 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59639 ']'
00:07:55.629 11:16:17 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59639
00:07:55.629 11:16:17 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:07:55.629 11:16:17 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:55.629 11:16:17 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59639
00:07:55.629 11:16:17 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:07:55.629 11:16:17 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:07:55.629 11:16:17 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59639'
00:07:55.629 killing process with pid 59639
00:07:55.629 11:16:17 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59639
00:07:55.629 11:16:17 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59639
00:07:55.886 [2024-12-10 11:16:17.943281] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:07:56.819
00:07:56.819 real 0m4.345s
00:07:56.819 user 0m7.633s
00:07:56.819 sys 0m0.407s
00:07:56.819 11:16:18 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:56.819 ************************************
00:07:56.819 END TEST event_scheduler
00:07:56.819 ************************************
00:07:56.819 11:16:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:57.077 11:16:18 event -- event/event.sh@51 -- # modprobe -n nbd
00:07:57.077 11:16:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:07:57.077 11:16:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:57.077 11:16:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:57.077 11:16:18 event -- common/autotest_common.sh@10 -- # set +x
00:07:57.077 ************************************
00:07:57.077 START TEST app_repeat
00:07:57.077 ************************************
00:07:57.077 11:16:19 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:07:57.077 Process app_repeat pid: 59736
00:07:57.077 spdk_app_start Round 0
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59736
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59736'
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:07:57.077 11:16:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59736 /var/tmp/spdk-nbd.sock
00:07:57.077 11:16:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59736 ']'
00:07:57.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:57.077 11:16:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:57.077 11:16:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:57.077 11:16:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:57.077 11:16:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:57.077 11:16:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:57.077 [2024-12-10 11:16:19.074234] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:07:57.077 [2024-12-10 11:16:19.074427] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59736 ]
00:07:57.336 [2024-12-10 11:16:19.281659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:57.336 [2024-12-10 11:16:19.401066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:57.336 [2024-12-10 11:16:19.401073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:58.269 11:16:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:58.269 11:16:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:58.270 11:16:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:58.528 Malloc0
00:07:58.528 11:16:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:59.094 Malloc1
00:07:59.094 11:16:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:59.094 11:16:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:59.352 /dev/nbd0
00:07:59.352 11:16:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:59.352 11:16:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:59.352 1+0 records in
00:07:59.352 1+0 records out
00:07:59.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313261 s, 13.1 MB/s
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:59.352 11:16:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:59.352 11:16:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:59.352 11:16:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:59.352 11:16:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:59.610 /dev/nbd1
00:07:59.610 11:16:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:59.610 11:16:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:59.610 1+0 records in
00:07:59.610 1+0 records out
00:07:59.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310153 s, 13.2 MB/s
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:59.610 11:16:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:59.610 11:16:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:59.610 11:16:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:59.610 11:16:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:59.610 11:16:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:59.610 11:16:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:59.868 {
00:07:59.868 "nbd_device": "/dev/nbd0",
00:07:59.868 "bdev_name": "Malloc0"
00:07:59.868 },
00:07:59.868 {
00:07:59.868 "nbd_device": "/dev/nbd1",
00:07:59.868 "bdev_name": "Malloc1"
00:07:59.868 }
00:07:59.868 ]'
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:59.868 {
00:07:59.868 "nbd_device": "/dev/nbd0",
00:07:59.868 "bdev_name": "Malloc0"
00:07:59.868 },
00:07:59.868 {
00:07:59.868 "nbd_device": "/dev/nbd1",
00:07:59.868 "bdev_name": "Malloc1"
00:07:59.868 }
00:07:59.868 ]'
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:59.868 /dev/nbd1'
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:59.868 /dev/nbd1'
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:59.868 256+0 records in
00:07:59.868 256+0 records out
00:07:59.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0077464 s, 135 MB/s
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:59.868 11:16:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:59.868 256+0 records in
00:07:59.868 256+0 records out
00:07:59.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242313 s, 43.3 MB/s
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:00.126 256+0 records in
00:08:00.126 256+0 records out
00:08:00.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.036085 s, 29.1 MB/s
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:00.126 11:16:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:00.387 11:16:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:00.387 11:16:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:00.387 11:16:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:00.387 11:16:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:00.387 11:16:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:00.387 11:16:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:00.387 11:16:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:00.387 11:16:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:00.387 11:16:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:00.387 11:16:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:00.651 11:16:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:00.651 11:16:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:00.651 11:16:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:00.651 11:16:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:00.651 11:16:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:00.651 11:16:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:00.651 11:16:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:00.651 11:16:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:00.651 11:16:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:00.651 11:16:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:00.651 11:16:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:00.910 11:16:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:00.910 11:16:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:00.910 11:16:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:00.910 11:16:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:00.910 11:16:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:00.910 11:16:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:00.910 11:16:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:08:00.910 11:16:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:08:00.910 11:16:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:00.910 11:16:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:08:00.910 11:16:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:00.910 11:16:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:08:00.910 11:16:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:01.476 11:16:23 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:08:02.411 [2024-12-10 11:16:24.424425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:02.411 [2024-12-10 11:16:24.523263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:02.411 [2024-12-10 11:16:24.523274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:02.669 [2024-12-10 11:16:24.687849] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:02.669 [2024-12-10 11:16:24.687937] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:04.567 spdk_app_start Round 1
00:08:04.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:04.567 11:16:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:04.567 11:16:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:08:04.567 11:16:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59736 /var/tmp/spdk-nbd.sock
00:08:04.567 11:16:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59736 ']'
00:08:04.567 11:16:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:04.567 11:16:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:04.567 11:16:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:04.567 11:16:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:04.567 11:16:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:04.567 11:16:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:04.567 11:16:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:04.567 11:16:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:04.826 Malloc0
00:08:05.173 11:16:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:05.173 Malloc1
00:08:05.431 11:16:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:05.431 11:16:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:05.689 /dev/nbd0
00:08:05.689 11:16:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:05.689 11:16:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:05.689 1+0 records in
00:08:05.689 1+0 records out
00:08:05.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425993 s, 9.6 MB/s
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:05.689 11:16:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:05.689 11:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:05.689 11:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:05.689 11:16:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:05.947 /dev/nbd1
00:08:05.947 11:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:05.947 11:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:05.947 1+0 records in
00:08:05.947 1+0 records out
00:08:05.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278285 s, 14.7 MB/s
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:05.947 11:16:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:05.947 11:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:05.947 11:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:05.947 11:16:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:05.947 11:16:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:06.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:06.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:06.205 {
00:08:06.205 "nbd_device": "/dev/nbd0",
00:08:06.205 "bdev_name": "Malloc0"
00:08:06.205 },
00:08:06.205 {
00:08:06.205 "nbd_device": "/dev/nbd1",
00:08:06.205 "bdev_name": "Malloc1"
00:08:06.205 }
00:08:06.205 ]'
00:08:06.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:06.205 {
00:08:06.205 "nbd_device": "/dev/nbd0",
00:08:06.205 "bdev_name": "Malloc0"
00:08:06.205 },
00:08:06.205 {
00:08:06.205 "nbd_device": "/dev/nbd1",
00:08:06.205 "bdev_name": "Malloc1"
00:08:06.205 }
00:08:06.205 ]'
00:08:06.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:06.464 /dev/nbd1'
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:06.464 /dev/nbd1'
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:06.464 256+0 records in
00:08:06.464 256+0 records out
00:08:06.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00970305 s, 108 MB/s
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:06.464 256+0 records in
00:08:06.464 256+0 records out
00:08:06.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024319 s, 43.1 MB/s
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:06.464 256+0 records in
00:08:06.464 256+0 records out
00:08:06.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312708 s, 33.5 MB/s
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:06.464 11:16:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:06.722 11:16:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:06.722 11:16:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:06.722 11:16:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:06.722 11:16:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:06.722 11:16:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:06.722 11:16:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:06.722 11:16:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:06.722 11:16:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:06.722 11:16:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:06.722 11:16:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:06.979 11:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:06.979 11:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:06.979 11:16:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:06.979 11:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:06.979 11:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:06.979 11:16:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:06.979 11:16:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:06.979 11:16:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:06.979 11:16:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:07.237 11:16:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:07.237 11:16:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:07.495 11:16:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:07.495 11:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:07.495 11:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:07.495 11:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:07.495 11:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:07.495 11:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:07.495 11:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:08:07.495 11:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:08:07.495 11:16:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:07.495 11:16:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:08:07.495 11:16:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:07.495 11:16:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:08:07.495 11:16:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:08.060 11:16:29 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:08:08.993 [2024-12-10 11:16:30.924545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:08.993 [2024-12-10 11:16:31.023495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.993 [2024-12-10 11:16:31.023497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:09.251 [2024-12-10 11:16:31.191019] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:09.251 [2024-12-10 11:16:31.191147] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:11.219 11:16:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:11.219 spdk_app_start Round 2
00:08:11.219 11:16:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:08:11.219 11:16:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59736 /var/tmp/spdk-nbd.sock
00:08:11.219 11:16:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59736 ']'
00:08:11.219 11:16:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:11.219 11:16:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:11.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:11.219 11:16:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:11.219 11:16:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:11.219 11:16:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:11.219 11:16:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:11.219 11:16:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:11.219 11:16:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:11.477 Malloc0
00:08:11.477 11:16:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:12.043 Malloc1
00:08:12.043 11:16:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:12.043 11:16:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:12.301 /dev/nbd0
00:08:12.301 11:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:12.301 11:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:12.301 1+0 records in
00:08:12.301 1+0 records out
00:08:12.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293829 s, 13.9 MB/s
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:12.301 11:16:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:12.301 11:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:12.301 11:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:12.301 11:16:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:12.559 /dev/nbd1
00:08:12.559 11:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:12.559 11:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:12.559 1+0 records in
00:08:12.559 1+0 records out
00:08:12.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046054 s, 8.9 MB/s
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:12.559 11:16:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:12.559 11:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:12.559 11:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:12.559 11:16:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:12.559 11:16:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:12.559 11:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:13.125 11:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:13.125 {
00:08:13.125 "nbd_device": "/dev/nbd0",
00:08:13.125 "bdev_name": "Malloc0"
00:08:13.125 },
00:08:13.125 {
00:08:13.125 "nbd_device": "/dev/nbd1",
00:08:13.125 "bdev_name": "Malloc1"
00:08:13.125 }
00:08:13.125 ]'
00:08:13.125 11:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:13.125 {
00:08:13.125 "nbd_device": "/dev/nbd0",
00:08:13.125 "bdev_name": "Malloc0"
00:08:13.125 },
00:08:13.125 {
00:08:13.125 "nbd_device": "/dev/nbd1",
00:08:13.125 "bdev_name": "Malloc1"
00:08:13.125 }
00:08:13.125 ]'
00:08:13.125 11:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:13.125 /dev/nbd1'
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:13.125 /dev/nbd1'
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:13.125 256+0 records in
00:08:13.125 256+0 records out
00:08:13.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00598102 s, 175 MB/s
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:13.125 256+0 records in
00:08:13.125 256+0 records out
00:08:13.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301002 s, 34.8 MB/s
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:13.125 256+0 records in
00:08:13.125 256+0 records out
00:08:13.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0357916 s, 29.3 MB/s
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:13.125 11:16:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:13.383 11:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:13.383 11:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:13.383 11:16:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:13.383 11:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:13.383 11:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:13.383 11:16:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:13.383 11:16:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:13.383 11:16:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:13.383 11:16:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:13.383 11:16:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:13.640 11:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:13.640 11:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:13.640 11:16:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:13.640 11:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:13.640 11:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:13.640 11:16:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:13.640 11:16:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:13.640 11:16:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:13.640 11:16:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:13.640 11:16:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:13.897 11:16:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:13.897 11:16:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:13.897 11:16:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:13.897 11:16:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:14.154 11:16:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:14.154 11:16:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:14.154 11:16:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:14.154 11:16:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:08:14.154 11:16:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:08:14.154 11:16:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:14.154 11:16:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:08:14.154 11:16:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:14.154 11:16:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:08:14.154 11:16:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:14.412 11:16:36 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:08:15.796 [2024-12-10 11:16:37.601403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:15.796 [2024-12-10 11:16:37.700839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:15.796 [2024-12-10 11:16:37.700849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:15.796 [2024-12-10 11:16:37.870008] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:15.796 [2024-12-10 11:16:37.870115] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:17.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:17.698 11:16:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59736 /var/tmp/spdk-nbd.sock
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59736 ']'
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:17.698 11:16:39 event.app_repeat -- event/event.sh@39 -- # killprocess 59736
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59736 ']'
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59736
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:17.698 11:16:39 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59736
00:08:17.956 killing process with pid 59736
00:08:17.956 11:16:39 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:17.956 11:16:39 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:17.956 11:16:39 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59736'
00:08:17.956 11:16:39 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59736
00:08:17.956 11:16:39 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59736
00:08:18.891 spdk_app_start is called in Round 0.
00:08:18.891 Shutdown signal received, stop current app iteration
00:08:18.891 Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 reinitialization...
00:08:18.891 spdk_app_start is called in Round 1.
00:08:18.891 Shutdown signal received, stop current app iteration
00:08:18.891 Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 reinitialization...
00:08:18.891 spdk_app_start is called in Round 2.
00:08:18.891 Shutdown signal received, stop current app iteration
00:08:18.891 Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 reinitialization...
00:08:18.891 spdk_app_start is called in Round 3.
00:08:18.891 Shutdown signal received, stop current app iteration
00:08:18.891 11:16:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:08:18.891 11:16:40 event.app_repeat -- event/event.sh@42 -- # return 0
00:08:18.891
00:08:18.891 real 0m21.797s
00:08:18.891 user 0m48.866s
00:08:18.891 sys 0m2.783s
00:08:18.891 11:16:40 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:18.891 11:16:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:18.891 ************************************
00:08:18.891 END TEST app_repeat
00:08:18.891 ************************************
00:08:18.891 11:16:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:08:18.891 11:16:40 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:08:18.891 11:16:40 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:18.891 11:16:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:18.891 11:16:40 event -- common/autotest_common.sh@10 -- # set +x
00:08:18.891 ************************************
00:08:18.891 START TEST cpu_locks
00:08:18.891 ************************************
00:08:18.891 11:16:40 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:08:18.891 * Looking for test storage...
00:08:18.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:18.891 11:16:40 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:18.891 11:16:40 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:18.891 11:16:40 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:08:18.891 11:16:41 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.891 11:16:41 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:18.891 11:16:41 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.891 11:16:41 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:18.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.891 --rc genhtml_branch_coverage=1 00:08:18.891 --rc genhtml_function_coverage=1 00:08:18.891 --rc genhtml_legend=1 00:08:18.891 --rc geninfo_all_blocks=1 00:08:18.891 --rc geninfo_unexecuted_blocks=1 00:08:18.891 00:08:18.891 ' 00:08:18.891 11:16:41 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:18.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.891 --rc genhtml_branch_coverage=1 00:08:18.891 --rc genhtml_function_coverage=1 
00:08:18.891 --rc genhtml_legend=1 00:08:18.891 --rc geninfo_all_blocks=1 00:08:18.891 --rc geninfo_unexecuted_blocks=1 00:08:18.891 00:08:18.891 ' 00:08:18.891 11:16:41 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:18.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.891 --rc genhtml_branch_coverage=1 00:08:18.891 --rc genhtml_function_coverage=1 00:08:18.891 --rc genhtml_legend=1 00:08:18.891 --rc geninfo_all_blocks=1 00:08:18.891 --rc geninfo_unexecuted_blocks=1 00:08:18.891 00:08:18.891 ' 00:08:18.891 11:16:41 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:18.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.891 --rc genhtml_branch_coverage=1 00:08:18.891 --rc genhtml_function_coverage=1 00:08:18.891 --rc genhtml_legend=1 00:08:18.891 --rc geninfo_all_blocks=1 00:08:18.891 --rc geninfo_unexecuted_blocks=1 00:08:18.891 00:08:18.891 ' 00:08:18.891 11:16:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:18.891 11:16:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:18.891 11:16:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:18.891 11:16:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:18.891 11:16:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.891 11:16:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.891 11:16:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:18.891 ************************************ 00:08:18.892 START TEST default_locks 00:08:18.892 ************************************ 00:08:18.892 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:18.892 11:16:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60205 00:08:18.892 11:16:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60205 00:08:18.892 11:16:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:18.892 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60205 ']' 00:08:18.892 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.892 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.892 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.892 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.892 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:19.150 [2024-12-10 11:16:41.161485] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
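cpu_locks.sh, starting above, keeps two RPC socket paths because several of its sub-tests run a pair of spdk_tgt instances side by side: the default /var/tmp/spdk.sock for the first target and /var/tmp/spdk2.sock, passed via -r, for the second. The trap registered above makes sure both targets and any stale lock files go away on exit; a sketch of that setup, in which the cleanup body is an assumption since the trace only shows the trap line:

    rpc_sock1=/var/tmp/spdk.sock
    rpc_sock2=/var/tmp/spdk2.sock

    cleanup() {                        # body assumed, not shown in the trace
      [ -n "$spdk_tgt_pid" ]  && kill "$spdk_tgt_pid"  2>/dev/null || true
      [ -n "$spdk_tgt_pid2" ] && kill "$spdk_tgt_pid2" 2>/dev/null || true
      rm -f /var/tmp/spdk_cpu_lock_*   # lock file prefix as seen later in this log
    }
    trap cleanup EXIT SIGTERM SIGINT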
00:08:19.150 [2024-12-10 11:16:41.161656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60205 ] 00:08:19.409 [2024-12-10 11:16:41.353962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.409 [2024-12-10 11:16:41.506082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.345 11:16:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.345 11:16:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:20.345 11:16:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60205 00:08:20.345 11:16:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60205 00:08:20.345 11:16:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:20.604 11:16:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60205 00:08:20.604 11:16:42 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60205 ']' 00:08:20.604 11:16:42 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60205 00:08:20.604 11:16:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:20.604 11:16:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.604 11:16:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60205 00:08:20.604 killing process with pid 60205 00:08:20.604 11:16:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.604 11:16:42 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.604 11:16:42 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60205' 00:08:20.604 11:16:42 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60205 00:08:20.604 11:16:42 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60205 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60205 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60205 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:23.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
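locks_exist, traced above, is the central assertion of this whole file: a target started without --disable-cpumask-locks must appear in lslocks holding a spdk_cpu_lock file for each core it claimed. After killprocess, the harness re-runs waitforlisten under the expect-failure NOT wrapper, so the ERROR and "No such process" lines that follow are the expected outcome of default_locks, not a test failure. The check itself reduces to:

    locks_exist() {   # mirrors the event/cpu_locks.sh@22 trace above
      lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 60205   # true while the pid holds /var/tmp/spdk_cpu_lock_000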
00:08:23.135 ERROR: process (pid: 60205) is no longer running 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60205 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60205 ']' 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:23.135 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60205) - No such process 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:23.135 00:08:23.135 real 0m3.806s 00:08:23.135 user 0m4.026s 00:08:23.135 sys 0m0.620s 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.135 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:23.135 ************************************ 00:08:23.135 END TEST default_locks 00:08:23.135 ************************************ 00:08:23.135 11:16:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:23.135 11:16:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.135 11:16:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.135 11:16:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:23.135 ************************************ 00:08:23.135 START TEST default_locks_via_rpc 00:08:23.135 ************************************ 00:08:23.135 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:23.135 11:16:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60280 00:08:23.135 11:16:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:23.135 11:16:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # 
waitforlisten 60280 00:08:23.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.135 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60280 ']' 00:08:23.135 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.135 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.135 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.135 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.135 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.135 [2024-12-10 11:16:44.989975] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:08:23.135 [2024-12-10 11:16:44.990125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60280 ] 00:08:23.135 [2024-12-10 11:16:45.162763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.135 [2024-12-10 11:16:45.265590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60280 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60280 00:08:24.070 11:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:24.636 11:16:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60280 00:08:24.636 11:16:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60280 ']' 00:08:24.636 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60280 00:08:24.636 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:24.636 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.636 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60280 00:08:24.636 killing process with pid 60280 00:08:24.636 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.636 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.636 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60280' 00:08:24.636 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60280 00:08:24.636 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60280 00:08:26.536 ************************************ 00:08:26.536 END TEST default_locks_via_rpc 00:08:26.536 ************************************ 00:08:26.536 00:08:26.536 real 0m3.781s 00:08:26.536 user 0m3.959s 00:08:26.536 sys 0m0.628s 00:08:26.536 11:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.536 11:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.795 11:16:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:26.795 11:16:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.795 11:16:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.795 11:16:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:26.795 ************************************ 00:08:26.795 START TEST non_locking_app_on_locked_coremask 00:08:26.795 ************************************ 00:08:26.795 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:26.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.795 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60349 00:08:26.795 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60349 /var/tmp/spdk.sock 00:08:26.795 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60349 ']' 00:08:26.795 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.795 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.795 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
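default_locks_via_rpc, finishing above, is the runtime counterpart of the CLI flag: framework_disable_cpumask_locks releases a live target's per-core lock files and framework_enable_cpumask_locks claims them back, each step verified with the same lslocks probe. With rpc_cmd resolving to scripts/rpc.py against the default socket (a reasonable reading of the trace), the sequence is roughly:

    scripts/rpc.py framework_disable_cpumask_locks    # drop the core lock(s)
    # lslocks -p "$spdk_tgt_pid" now shows no spdk_cpu_lock entries
    scripts/rpc.py framework_enable_cpumask_locks     # re-claim them
    lslocks -p "$spdk_tgt_pid" | grep spdk_cpu_lock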
00:08:26.795 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:26.795 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.795 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:26.795 [2024-12-10 11:16:48.838281] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:08:26.795 [2024-12-10 11:16:48.838732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60349 ] 00:08:27.053 [2024-12-10 11:16:49.018781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.053 [2024-12-10 11:16:49.125896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.988 11:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.988 11:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:27.988 11:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:27.988 11:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60370 00:08:27.988 11:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60370 /var/tmp/spdk2.sock 00:08:27.988 11:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60370 ']' 00:08:27.988 11:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:27.988 11:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.988 11:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:27.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:27.988 11:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.988 11:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.988 [2024-12-10 11:16:50.042608] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:08:27.988 [2024-12-10 11:16:50.043020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60370 ] 00:08:28.247 [2024-12-10 11:16:50.236027] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
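The second target launched above runs on the very same core mask (0x1) as the lock-holding first one and only comes up because of --disable-cpumask-locks; the "CPU core locks deactivated" notice is the confirmation that it never tried to claim core 0. The pairing, with the binary path as traced:

    # first instance claims core 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # second instance shares core 0 by opting out of lock claiming
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
        --disable-cpumask-locks -r /var/tmp/spdk2.sock &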
00:08:28.247 [2024-12-10 11:16:50.236101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.506 [2024-12-10 11:16:50.445572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.926 11:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.926 11:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:29.926 11:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60349 00:08:29.926 11:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60349 00:08:29.926 11:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:30.860 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60349 00:08:30.860 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60349 ']' 00:08:30.860 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60349 00:08:30.860 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:30.860 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.860 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60349 00:08:30.860 killing process with pid 60349 00:08:30.860 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.860 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.860 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60349' 00:08:30.860 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60349 00:08:30.860 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60349 00:08:35.087 11:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60370 00:08:35.087 11:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60370 ']' 00:08:35.087 11:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60370 00:08:35.087 11:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:35.087 11:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.087 11:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60370 00:08:35.087 killing process with pid 60370 00:08:35.087 11:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.087 11:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.087 11:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60370' 00:08:35.087 11:16:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60370 00:08:35.087 11:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60370 00:08:37.654 ************************************ 00:08:37.654 END TEST non_locking_app_on_locked_coremask 00:08:37.654 ************************************ 00:08:37.654 00:08:37.654 real 0m10.484s 00:08:37.654 user 0m11.039s 00:08:37.654 sys 0m1.255s 00:08:37.654 11:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.654 11:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:37.654 11:16:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:37.654 11:16:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.654 11:16:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.654 11:16:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.654 ************************************ 00:08:37.654 START TEST locking_app_on_unlocked_coremask 00:08:37.654 ************************************ 00:08:37.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.654 11:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:37.654 11:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60500 00:08:37.654 11:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60500 /var/tmp/spdk.sock 00:08:37.654 11:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:37.654 11:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60500 ']' 00:08:37.654 11:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.654 11:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.654 11:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.654 11:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.654 11:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:37.654 [2024-12-10 11:16:59.392766] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:08:37.654 [2024-12-10 11:16:59.392976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60500 ] 00:08:37.654 [2024-12-10 11:16:59.587984] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
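locking_app_on_unlocked_coremask, starting above, inverts the previous scenario: now the first target is the unlocked one (hence its "CPU core locks deactivated" notice), while the plain second target is expected to claim core 0. That is why the lslocks probe further down targets the second pid, 60527, rather than the first:

    locks_exist "$spdk_tgt_pid2"   # pid 60527 below; the unlocked first app never shows in lslocks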
00:08:37.654 [2024-12-10 11:16:59.588079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.654 [2024-12-10 11:16:59.719031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:38.588 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.588 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:38.588 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60527 00:08:38.588 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60527 /var/tmp/spdk2.sock 00:08:38.588 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:38.588 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60527 ']' 00:08:38.588 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:38.588 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.588 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:38.588 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.588 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:38.588 [2024-12-10 11:17:00.748982] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:08:38.588 [2024-12-10 11:17:00.749458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60527 ] 00:08:38.846 [2024-12-10 11:17:00.943301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.104 [2024-12-10 11:17:01.152258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.004 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.004 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:41.004 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60527 00:08:41.004 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60527 00:08:41.004 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:41.571 11:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60500 00:08:41.571 11:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60500 ']' 00:08:41.571 11:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60500 00:08:41.571 11:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:41.571 11:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.571 11:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60500 00:08:41.571 killing process with pid 60500 00:08:41.571 11:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.571 11:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.571 11:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60500' 00:08:41.571 11:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60500 00:08:41.571 11:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60500 00:08:45.761 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60527 00:08:45.761 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60527 ']' 00:08:45.761 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60527 00:08:45.761 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:45.761 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.761 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60527 00:08:45.761 killing process with pid 60527 00:08:45.761 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.761 11:17:07 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.761 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60527' 00:08:45.761 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60527 00:08:45.761 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60527 00:08:48.293 00:08:48.293 real 0m10.615s 00:08:48.293 user 0m11.428s 00:08:48.293 sys 0m1.233s 00:08:48.293 11:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.293 11:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:48.293 ************************************ 00:08:48.293 END TEST locking_app_on_unlocked_coremask 00:08:48.293 ************************************ 00:08:48.293 11:17:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:48.293 11:17:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.293 11:17:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.293 11:17:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:48.293 ************************************ 00:08:48.293 START TEST locking_app_on_locked_coremask 00:08:48.293 ************************************ 00:08:48.293 11:17:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:48.293 11:17:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60662 00:08:48.293 11:17:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:48.293 11:17:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60662 /var/tmp/spdk.sock 00:08:48.293 11:17:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60662 ']' 00:08:48.293 11:17:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.293 11:17:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.293 11:17:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.293 11:17:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.293 11:17:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:48.293 [2024-12-10 11:17:10.012095] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
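locking_app_on_locked_coremask, starting above, is the first sub-test where both targets want core 0 with locking enabled, so the second spdk_tgt must abort at startup; the harness asserts that by wrapping waitforlisten in NOT, the expect-failure helper whose xtrace (es=0, valid_exec_arg, es=1) fills the lines that follow. A reduced sketch of that helper, assuming the autotest_common.sh internals match what the trace shows:

    NOT() {               # succeed only if the wrapped command fails
      local es=0
      "$@" || es=$?
      (( es != 0 ))
    }
    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock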
00:08:48.293 [2024-12-10 11:17:10.012256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60662 ] 00:08:48.293 [2024-12-10 11:17:10.189486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.293 [2024-12-10 11:17:10.322079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60678 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60678 /var/tmp/spdk2.sock 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60678 /var/tmp/spdk2.sock 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60678 /var/tmp/spdk2.sock 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60678 ']' 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:49.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.228 11:17:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.228 [2024-12-10 11:17:11.273375] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:08:49.228 [2024-12-10 11:17:11.274216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60678 ] 00:08:49.486 [2024-12-10 11:17:11.468012] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60662 has claimed it. 00:08:49.486 [2024-12-10 11:17:11.468112] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:50.052 ERROR: process (pid: 60678) is no longer running 00:08:50.052 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60678) - No such process 00:08:50.052 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.052 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:50.052 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:50.052 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:50.052 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:50.052 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:50.052 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60662 00:08:50.052 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60662 00:08:50.052 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:50.310 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60662 00:08:50.310 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60662 ']' 00:08:50.310 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60662 00:08:50.310 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:50.310 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.310 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60662 00:08:50.310 killing process with pid 60662 00:08:50.310 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.310 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.310 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60662' 00:08:50.310 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60662 00:08:50.310 11:17:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60662 00:08:52.839 00:08:52.839 real 0m4.700s 00:08:52.839 user 0m5.268s 00:08:52.839 sys 0m0.785s 00:08:52.839 11:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.839 ************************************ 00:08:52.839 END 
TEST locking_app_on_locked_coremask 00:08:52.839 ************************************ 00:08:52.839 11:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:52.839 11:17:14 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:52.840 11:17:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.840 11:17:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.840 11:17:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:52.840 ************************************ 00:08:52.840 START TEST locking_overlapped_coremask 00:08:52.840 ************************************ 00:08:52.840 11:17:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:52.840 11:17:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60742 00:08:52.840 11:17:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:52.840 11:17:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60742 /var/tmp/spdk.sock 00:08:52.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.840 11:17:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60742 ']' 00:08:52.840 11:17:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.840 11:17:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.840 11:17:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.840 11:17:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.840 11:17:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:52.840 [2024-12-10 11:17:14.759766] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:08:52.840 [2024-12-10 11:17:14.760104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60742 ] 00:08:52.840 [2024-12-10 11:17:14.939353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:53.103 [2024-12-10 11:17:15.126654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.103 [2024-12-10 11:17:15.126678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.104 [2024-12-10 11:17:15.126690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60766 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60766 /var/tmp/spdk2.sock 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60766 /var/tmp/spdk2.sock 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:54.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60766 /var/tmp/spdk2.sock 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60766 ']' 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:54.040 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.041 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:54.041 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.041 11:17:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:54.041 [2024-12-10 11:17:16.081030] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
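locking_overlapped_coremask moves from a single shared core to overlapping masks: the first target above took 0x7 and started reactors on cores 0-2, while the second target just launched with 0x1c wants cores 2-4, colliding on core 2. The claim error on the next lines is therefore the expected result, and check_remaining_locks later asserts that exactly the first target's three lock files survive. The mask arithmetic:

    #   0x7  = 0b00111 -> cores 0,1,2   (first target, holds the locks)
    #   0x1c = 0b11100 -> cores 2,3,4   (second target, collides on core 2)
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> overlap: 0x4, i.e. core 2
    ls /var/tmp/spdk_cpu_lock_*                  # expect _000 _001 _002 only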
00:08:54.041 [2024-12-10 11:17:16.081230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60766 ] 00:08:54.298 [2024-12-10 11:17:16.291999] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60742 has claimed it. 00:08:54.298 [2024-12-10 11:17:16.292095] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:54.865 ERROR: process (pid: 60766) is no longer running 00:08:54.865 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60766) - No such process 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60742 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60742 ']' 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60742 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60742 00:08:54.865 killing process with pid 60742 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60742' 00:08:54.865 11:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60742 00:08:54.865 11:17:16 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60742 00:08:57.422 ************************************ 00:08:57.422 END TEST locking_overlapped_coremask 00:08:57.422 ************************************ 00:08:57.422 00:08:57.422 real 0m4.362s 00:08:57.422 user 0m11.994s 00:08:57.422 sys 0m0.596s 00:08:57.422 11:17:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.422 11:17:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:57.422 11:17:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:57.422 11:17:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.422 11:17:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.422 11:17:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:57.422 ************************************ 00:08:57.422 START TEST locking_overlapped_coremask_via_rpc 00:08:57.422 ************************************ 00:08:57.422 11:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:57.422 11:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60830 00:08:57.422 11:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:57.422 11:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60830 /var/tmp/spdk.sock 00:08:57.422 11:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60830 ']' 00:08:57.422 11:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.423 11:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.423 11:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.423 11:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.423 11:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.423 [2024-12-10 11:17:19.210938] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:08:57.423 [2024-12-10 11:17:19.211394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60830 ] 00:08:57.423 [2024-12-10 11:17:19.413124] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
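locking_overlapped_coremask_via_rpc, starting above, replays the same 0x7 / 0x1c collision purely over RPC: both targets boot with --disable-cpumask-locks (the first one's notice is above), then enabling locks on the first target claims cores 0-2, and the identical RPC against the second target's socket has to fail on the shared core 2, as the JSON-RPC exchange below shows. The two calls, as traced:

    scripts/rpc.py framework_enable_cpumask_locks    # first target: claims cores 0-2
    NOT scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # collides on core 2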
00:08:57.423 [2024-12-10 11:17:19.413257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:57.423 [2024-12-10 11:17:19.529091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.423 [2024-12-10 11:17:19.529171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.423 [2024-12-10 11:17:19.529172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:58.358 11:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.358 11:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:58.358 11:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60852 00:08:58.358 11:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:58.358 11:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60852 /var/tmp/spdk2.sock 00:08:58.358 11:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60852 ']' 00:08:58.358 11:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:58.358 11:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.358 11:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:58.358 11:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.358 11:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.358 [2024-12-10 11:17:20.503854] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:08:58.358 [2024-12-10 11:17:20.504050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60852 ] 00:08:58.618 [2024-12-10 11:17:20.726736] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:58.618 [2024-12-10 11:17:20.726866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:58.916 [2024-12-10 11:17:20.938544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.916 [2024-12-10 11:17:20.941731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.916 [2024-12-10 11:17:20.941731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.446 [2024-12-10 11:17:23.313005] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60830 has claimed it. 00:09:01.446 request: 00:09:01.446 { 00:09:01.446 "method": "framework_enable_cpumask_locks", 00:09:01.446 "req_id": 1 00:09:01.446 } 00:09:01.446 Got JSON-RPC error response 00:09:01.446 response: 00:09:01.446 { 00:09:01.446 "code": -32603, 00:09:01.446 "message": "Failed to claim CPU core: 2" 00:09:01.446 } 00:09:01.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
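Why the claim fails on core 2 specifically: mask 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the only contested core, and the first target grabbed its lock when its own framework_enable_cpumask_locks call succeeded just above. A minimal sketch of the collision (the overlap arithmetic is inferred; the rpc.py socket flag mirrors the rpc_cmd calls in the trace):

    printf 'contested mask: 0x%x\n' $((0x7 & 0x1c))  # => 0x4, i.e. core 2
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_enable_cpumask_locks                         # first target: locks cores 0-2
    $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: -32603, core 2 taken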
00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60830 /var/tmp/spdk.sock 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60830 ']' 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.446 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.704 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.704 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:01.704 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60852 /var/tmp/spdk2.sock 00:09:01.704 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60852 ']' 00:09:01.704 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:01.704 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.704 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:01.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:01.704 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.704 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.963 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.963 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:01.963 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:01.963 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:01.963 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:01.963 ************************************ 00:09:01.963 END TEST locking_overlapped_coremask_via_rpc 00:09:01.963 ************************************ 00:09:01.963 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:01.963 00:09:01.963 real 0m4.872s 00:09:01.963 user 0m1.823s 00:09:01.963 sys 0m0.221s 00:09:01.963 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.963 11:17:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.963 11:17:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:01.963 11:17:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60830 ]] 00:09:01.963 11:17:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60830 00:09:01.963 11:17:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60830 ']' 00:09:01.963 11:17:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60830 00:09:01.963 11:17:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:01.963 11:17:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.963 11:17:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60830 00:09:01.963 killing process with pid 60830 00:09:01.963 11:17:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.963 11:17:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.963 11:17:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60830' 00:09:01.963 11:17:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60830 00:09:01.963 11:17:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60830 00:09:04.517 11:17:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60852 ]] 00:09:04.517 11:17:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60852 00:09:04.517 11:17:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60852 ']' 00:09:04.517 11:17:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60852 00:09:04.517 11:17:26 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:04.517 11:17:26 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.517 
11:17:26 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60852 00:09:04.517 killing process with pid 60852 00:09:04.517 11:17:26 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:04.517 11:17:26 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:04.517 11:17:26 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60852' 00:09:04.517 11:17:26 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60852 00:09:04.517 11:17:26 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60852 00:09:06.420 11:17:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:06.420 11:17:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:06.420 11:17:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60830 ]] 00:09:06.420 11:17:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60830 00:09:06.420 Process with pid 60830 is not found 00:09:06.420 Process with pid 60852 is not found 00:09:06.420 11:17:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60830 ']' 00:09:06.420 11:17:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60830 00:09:06.420 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60830) - No such process 00:09:06.420 11:17:28 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60830 is not found' 00:09:06.420 11:17:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60852 ]] 00:09:06.420 11:17:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60852 00:09:06.420 11:17:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60852 ']' 00:09:06.420 11:17:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60852 00:09:06.420 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60852) - No such process 00:09:06.420 11:17:28 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60852 is not found' 00:09:06.420 11:17:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:06.420 00:09:06.420 real 0m47.427s 00:09:06.420 user 1m25.251s 00:09:06.420 sys 0m6.324s 00:09:06.420 11:17:28 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.420 11:17:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:06.420 ************************************ 00:09:06.420 END TEST cpu_locks 00:09:06.420 ************************************ 00:09:06.420 ************************************ 00:09:06.420 END TEST event 00:09:06.420 ************************************ 00:09:06.420 00:09:06.420 real 1m18.686s 00:09:06.420 user 2m28.980s 00:09:06.420 sys 0m10.017s 00:09:06.420 11:17:28 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.420 11:17:28 event -- common/autotest_common.sh@10 -- # set +x 00:09:06.420 11:17:28 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:06.420 11:17:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.420 11:17:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.420 11:17:28 -- common/autotest_common.sh@10 -- # set +x 00:09:06.420 ************************************ 00:09:06.420 START TEST thread 00:09:06.420 ************************************ 00:09:06.420 11:17:28 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:06.420 * Looking for test storage... 
00:09:06.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:06.420 11:17:28 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:06.420 11:17:28 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:06.420 11:17:28 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:06.420 11:17:28 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:06.420 11:17:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.420 11:17:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.420 11:17:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.420 11:17:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.420 11:17:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.420 11:17:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.420 11:17:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.420 11:17:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.420 11:17:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.420 11:17:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.420 11:17:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.420 11:17:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:06.420 11:17:28 thread -- scripts/common.sh@345 -- # : 1 00:09:06.420 11:17:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.420 11:17:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.420 11:17:28 thread -- scripts/common.sh@365 -- # decimal 1 00:09:06.420 11:17:28 thread -- scripts/common.sh@353 -- # local d=1 00:09:06.420 11:17:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.420 11:17:28 thread -- scripts/common.sh@355 -- # echo 1 00:09:06.420 11:17:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.679 11:17:28 thread -- scripts/common.sh@366 -- # decimal 2 00:09:06.679 11:17:28 thread -- scripts/common.sh@353 -- # local d=2 00:09:06.679 11:17:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.679 11:17:28 thread -- scripts/common.sh@355 -- # echo 2 00:09:06.679 11:17:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.679 11:17:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.679 11:17:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.679 11:17:28 thread -- scripts/common.sh@368 -- # return 0 00:09:06.679 11:17:28 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.679 11:17:28 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:06.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.679 --rc genhtml_branch_coverage=1 00:09:06.679 --rc genhtml_function_coverage=1 00:09:06.679 --rc genhtml_legend=1 00:09:06.679 --rc geninfo_all_blocks=1 00:09:06.679 --rc geninfo_unexecuted_blocks=1 00:09:06.679 00:09:06.679 ' 00:09:06.679 11:17:28 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:06.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.679 --rc genhtml_branch_coverage=1 00:09:06.679 --rc genhtml_function_coverage=1 00:09:06.679 --rc genhtml_legend=1 00:09:06.679 --rc geninfo_all_blocks=1 00:09:06.679 --rc geninfo_unexecuted_blocks=1 00:09:06.679 00:09:06.679 ' 00:09:06.679 11:17:28 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:06.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:06.679 --rc genhtml_branch_coverage=1 00:09:06.679 --rc genhtml_function_coverage=1 00:09:06.679 --rc genhtml_legend=1 00:09:06.679 --rc geninfo_all_blocks=1 00:09:06.679 --rc geninfo_unexecuted_blocks=1 00:09:06.679 00:09:06.679 ' 00:09:06.679 11:17:28 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:06.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.679 --rc genhtml_branch_coverage=1 00:09:06.679 --rc genhtml_function_coverage=1 00:09:06.679 --rc genhtml_legend=1 00:09:06.679 --rc geninfo_all_blocks=1 00:09:06.679 --rc geninfo_unexecuted_blocks=1 00:09:06.679 00:09:06.679 ' 00:09:06.679 11:17:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:06.679 11:17:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:06.679 11:17:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.679 11:17:28 thread -- common/autotest_common.sh@10 -- # set +x 00:09:06.679 ************************************ 00:09:06.679 START TEST thread_poller_perf 00:09:06.679 ************************************ 00:09:06.679 11:17:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:06.679 [2024-12-10 11:17:28.648162] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:09:06.679 [2024-12-10 11:17:28.648572] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61043 ] 00:09:06.679 [2024-12-10 11:17:28.833214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.938 [2024-12-10 11:17:28.956961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.938 Running 1000 pollers for 1 seconds with 1 microseconds period. 
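For readers decoding the flags: poller_perf's arguments map one-to-one onto the banner above, a correspondence inferred from the two runs in this log (-l 1 here, -l 0 in the run that follows):

    /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf \
      -b 1000 \   # number of pollers to register
      -l 1 \      # poller period in microseconds (0 = busy-loop pollers)
      -t 1        # test duration in seconds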
00:09:08.315 [2024-12-10T11:17:30.482Z] ====================================== 00:09:08.315 [2024-12-10T11:17:30.482Z] busy:2212739389 (cyc) 00:09:08.315 [2024-12-10T11:17:30.482Z] total_run_count: 273000 00:09:08.315 [2024-12-10T11:17:30.482Z] tsc_hz: 2200000000 (cyc) 00:09:08.315 [2024-12-10T11:17:30.482Z] ====================================== 00:09:08.315 [2024-12-10T11:17:30.482Z] poller_cost: 8105 (cyc), 3684 (nsec) 00:09:08.315 00:09:08.315 real 0m1.643s 00:09:08.315 user 0m1.443s 00:09:08.315 sys 0m0.088s 00:09:08.315 11:17:30 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.315 ************************************ 00:09:08.315 END TEST thread_poller_perf 00:09:08.315 ************************************ 00:09:08.315 11:17:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:08.315 11:17:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:08.315 11:17:30 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:08.315 11:17:30 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.315 11:17:30 thread -- common/autotest_common.sh@10 -- # set +x 00:09:08.315 ************************************ 00:09:08.315 START TEST thread_poller_perf 00:09:08.315 ************************************ 00:09:08.315 11:17:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:08.315 [2024-12-10 11:17:30.339067] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:09:08.315 [2024-12-10 11:17:30.339495] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61085 ] 00:09:08.573 [2024-12-10 11:17:30.526997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.573 Running 1000 pollers for 1 seconds with 0 microseconds period. 
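The poller_cost figures are simply the busy-cycle counter divided by the run count, converted to nanoseconds via the reported TSC rate; the formula is inferred from the numbers above, not taken from the harness. With the first run's counters:

    busy=2212739389 runs=273000 tsc_hz=2200000000
    echo "$((busy / runs)) cyc"                          # => 8105
    echo "$((busy / runs * 1000000000 / tsc_hz)) nsec"   # => 3684

The same arithmetic reproduces the 662 cyc / 300 nsec of the zero-period run below.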
00:09:08.573 [2024-12-10 11:17:30.650466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.952 [2024-12-10T11:17:32.119Z] ====================================== 00:09:09.952 [2024-12-10T11:17:32.119Z] busy:2204770103 (cyc) 00:09:09.952 [2024-12-10T11:17:32.119Z] total_run_count: 3330000 00:09:09.952 [2024-12-10T11:17:32.119Z] tsc_hz: 2200000000 (cyc) 00:09:09.952 [2024-12-10T11:17:32.119Z] ====================================== 00:09:09.952 [2024-12-10T11:17:32.119Z] poller_cost: 662 (cyc), 300 (nsec) 00:09:09.952 00:09:09.952 real 0m1.582s 00:09:09.952 user 0m1.377s 00:09:09.952 sys 0m0.095s 00:09:09.952 ************************************ 00:09:09.952 END TEST thread_poller_perf 00:09:09.952 ************************************ 00:09:09.952 11:17:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.952 11:17:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:09.952 11:17:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:09.952 00:09:09.952 real 0m3.549s 00:09:09.952 user 0m3.013s 00:09:09.952 sys 0m0.309s 00:09:09.952 ************************************ 00:09:09.952 END TEST thread 00:09:09.952 ************************************ 00:09:09.952 11:17:31 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.952 11:17:31 thread -- common/autotest_common.sh@10 -- # set +x 00:09:09.952 11:17:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:09.952 11:17:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:09.952 11:17:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.952 11:17:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.952 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:09:09.952 ************************************ 00:09:09.952 START TEST app_cmdline 00:09:09.952 ************************************ 00:09:09.952 11:17:31 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:09.952 * Looking for test storage... 
00:09:09.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:09.952 11:17:32 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.952 11:17:32 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.952 11:17:32 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:10.211 11:17:32 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.211 11:17:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:10.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:10.211 11:17:32 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.211 11:17:32 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.211 --rc genhtml_branch_coverage=1 00:09:10.211 --rc genhtml_function_coverage=1 00:09:10.211 --rc genhtml_legend=1 00:09:10.211 --rc geninfo_all_blocks=1 00:09:10.211 --rc geninfo_unexecuted_blocks=1 00:09:10.211 00:09:10.211 ' 00:09:10.211 11:17:32 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.211 --rc genhtml_branch_coverage=1 00:09:10.211 --rc genhtml_function_coverage=1 00:09:10.211 --rc genhtml_legend=1 00:09:10.211 --rc geninfo_all_blocks=1 00:09:10.212 --rc geninfo_unexecuted_blocks=1 00:09:10.212 00:09:10.212 ' 00:09:10.212 11:17:32 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:10.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.212 --rc genhtml_branch_coverage=1 00:09:10.212 --rc genhtml_function_coverage=1 00:09:10.212 --rc genhtml_legend=1 00:09:10.212 --rc geninfo_all_blocks=1 00:09:10.212 --rc geninfo_unexecuted_blocks=1 00:09:10.212 00:09:10.212 ' 00:09:10.212 11:17:32 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:10.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.212 --rc genhtml_branch_coverage=1 00:09:10.212 --rc genhtml_function_coverage=1 00:09:10.212 --rc genhtml_legend=1 00:09:10.212 --rc geninfo_all_blocks=1 00:09:10.212 --rc geninfo_unexecuted_blocks=1 00:09:10.212 00:09:10.212 ' 00:09:10.212 11:17:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:10.212 11:17:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61173 00:09:10.212 11:17:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61173 00:09:10.212 11:17:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:10.212 11:17:32 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61173 ']' 00:09:10.212 11:17:32 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.212 11:17:32 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.212 11:17:32 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.212 11:17:32 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.212 11:17:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:10.212 [2024-12-10 11:17:32.256433] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
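Context for the RPC exchanges that follow: cmdline.sh (traced above at @16) starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable over /var/tmp/spdk.sock. A sketch of the allowlist behaviour (rpc.py path as traced; the jq filters are illustrative assumptions):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version | jq -r .version     # "SPDK v25.01-pre git sha1 92d1e663a"
    $rpc rpc_get_methods | jq -r '.[]' | sort  # exactly the two allowed methods
    $rpc env_dpdk_get_mem_stats                # rejected with -32601 "Method not found"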
00:09:10.212 [2024-12-10 11:17:32.256705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61173 ] 00:09:10.470 [2024-12-10 11:17:32.442006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.470 [2024-12-10 11:17:32.567237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.405 11:17:33 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.405 11:17:33 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:11.405 11:17:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:11.664 { 00:09:11.664 "version": "SPDK v25.01-pre git sha1 92d1e663a", 00:09:11.664 "fields": { 00:09:11.664 "major": 25, 00:09:11.664 "minor": 1, 00:09:11.664 "patch": 0, 00:09:11.664 "suffix": "-pre", 00:09:11.664 "commit": "92d1e663a" 00:09:11.664 } 00:09:11.664 } 00:09:11.664 11:17:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:11.664 11:17:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:11.664 11:17:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:11.664 11:17:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:11.664 11:17:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:11.664 11:17:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:11.664 11:17:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.664 11:17:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:11.664 11:17:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:11.664 11:17:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:11.664 11:17:33 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:11.923 request: 00:09:11.923 { 00:09:11.923 "method": "env_dpdk_get_mem_stats", 00:09:11.923 "req_id": 1 00:09:11.923 } 00:09:11.923 Got JSON-RPC error response 00:09:11.923 response: 00:09:11.923 { 00:09:11.923 "code": -32601, 00:09:11.923 "message": "Method not found" 00:09:11.923 } 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:12.181 11:17:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61173 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61173 ']' 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61173 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61173 00:09:12.181 killing process with pid 61173 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61173' 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@973 -- # kill 61173 00:09:12.181 11:17:34 app_cmdline -- common/autotest_common.sh@978 -- # wait 61173 00:09:14.088 00:09:14.088 real 0m4.224s 00:09:14.088 user 0m4.846s 00:09:14.088 sys 0m0.520s 00:09:14.088 11:17:36 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.088 11:17:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:14.088 ************************************ 00:09:14.088 END TEST app_cmdline 00:09:14.088 ************************************ 00:09:14.088 11:17:36 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:14.088 11:17:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.088 11:17:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.088 11:17:36 -- common/autotest_common.sh@10 -- # set +x 00:09:14.088 ************************************ 00:09:14.088 START TEST version 00:09:14.088 ************************************ 00:09:14.088 11:17:36 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:14.347 * Looking for test storage... 
00:09:14.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:14.347 11:17:36 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.347 11:17:36 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.347 11:17:36 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.347 11:17:36 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.347 11:17:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.347 11:17:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.347 11:17:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.348 11:17:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.348 11:17:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.348 11:17:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.348 11:17:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.348 11:17:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.348 11:17:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.348 11:17:36 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.348 11:17:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.348 11:17:36 version -- scripts/common.sh@344 -- # case "$op" in 00:09:14.348 11:17:36 version -- scripts/common.sh@345 -- # : 1 00:09:14.348 11:17:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.348 11:17:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.348 11:17:36 version -- scripts/common.sh@365 -- # decimal 1 00:09:14.348 11:17:36 version -- scripts/common.sh@353 -- # local d=1 00:09:14.348 11:17:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.348 11:17:36 version -- scripts/common.sh@355 -- # echo 1 00:09:14.348 11:17:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.348 11:17:36 version -- scripts/common.sh@366 -- # decimal 2 00:09:14.348 11:17:36 version -- scripts/common.sh@353 -- # local d=2 00:09:14.348 11:17:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.348 11:17:36 version -- scripts/common.sh@355 -- # echo 2 00:09:14.348 11:17:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.348 11:17:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.348 11:17:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.348 11:17:36 version -- scripts/common.sh@368 -- # return 0 00:09:14.348 11:17:36 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.348 11:17:36 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.348 --rc genhtml_branch_coverage=1 00:09:14.348 --rc genhtml_function_coverage=1 00:09:14.348 --rc genhtml_legend=1 00:09:14.348 --rc geninfo_all_blocks=1 00:09:14.348 --rc geninfo_unexecuted_blocks=1 00:09:14.348 00:09:14.348 ' 00:09:14.348 11:17:36 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.348 --rc genhtml_branch_coverage=1 00:09:14.348 --rc genhtml_function_coverage=1 00:09:14.348 --rc genhtml_legend=1 00:09:14.348 --rc geninfo_all_blocks=1 00:09:14.348 --rc geninfo_unexecuted_blocks=1 00:09:14.348 00:09:14.348 ' 00:09:14.348 11:17:36 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.348 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:14.348 --rc genhtml_branch_coverage=1 00:09:14.348 --rc genhtml_function_coverage=1 00:09:14.348 --rc genhtml_legend=1 00:09:14.348 --rc geninfo_all_blocks=1 00:09:14.348 --rc geninfo_unexecuted_blocks=1 00:09:14.348 00:09:14.348 ' 00:09:14.348 11:17:36 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.348 --rc genhtml_branch_coverage=1 00:09:14.348 --rc genhtml_function_coverage=1 00:09:14.348 --rc genhtml_legend=1 00:09:14.348 --rc geninfo_all_blocks=1 00:09:14.348 --rc geninfo_unexecuted_blocks=1 00:09:14.348 00:09:14.348 ' 00:09:14.348 11:17:36 version -- app/version.sh@17 -- # get_header_version major 00:09:14.348 11:17:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:14.348 11:17:36 version -- app/version.sh@14 -- # cut -f2 00:09:14.348 11:17:36 version -- app/version.sh@14 -- # tr -d '"' 00:09:14.348 11:17:36 version -- app/version.sh@17 -- # major=25 00:09:14.348 11:17:36 version -- app/version.sh@18 -- # get_header_version minor 00:09:14.348 11:17:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:14.348 11:17:36 version -- app/version.sh@14 -- # cut -f2 00:09:14.348 11:17:36 version -- app/version.sh@14 -- # tr -d '"' 00:09:14.348 11:17:36 version -- app/version.sh@18 -- # minor=1 00:09:14.348 11:17:36 version -- app/version.sh@19 -- # get_header_version patch 00:09:14.348 11:17:36 version -- app/version.sh@14 -- # cut -f2 00:09:14.348 11:17:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:14.348 11:17:36 version -- app/version.sh@14 -- # tr -d '"' 00:09:14.348 11:17:36 version -- app/version.sh@19 -- # patch=0 00:09:14.348 11:17:36 version -- app/version.sh@20 -- # get_header_version suffix 00:09:14.348 11:17:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:14.348 11:17:36 version -- app/version.sh@14 -- # cut -f2 00:09:14.348 11:17:36 version -- app/version.sh@14 -- # tr -d '"' 00:09:14.348 11:17:36 version -- app/version.sh@20 -- # suffix=-pre 00:09:14.348 11:17:36 version -- app/version.sh@22 -- # version=25.1 00:09:14.348 11:17:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:14.348 11:17:36 version -- app/version.sh@28 -- # version=25.1rc0 00:09:14.348 11:17:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:14.348 11:17:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:14.348 11:17:36 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:14.348 11:17:36 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:14.348 00:09:14.348 real 0m0.246s 00:09:14.348 user 0m0.158s 00:09:14.348 sys 0m0.117s 00:09:14.348 ************************************ 00:09:14.348 END TEST version 00:09:14.348 ************************************ 00:09:14.348 11:17:36 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.348 11:17:36 version -- common/autotest_common.sh@10 -- # set +x 00:09:14.615 11:17:36 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:14.615 11:17:36 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:14.615 11:17:36 -- spdk/autotest.sh@194 -- # uname -s 00:09:14.615 11:17:36 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:14.615 11:17:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:14.615 11:17:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:14.615 11:17:36 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:09:14.615 11:17:36 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:14.615 11:17:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.615 11:17:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.615 11:17:36 -- common/autotest_common.sh@10 -- # set +x 00:09:14.615 ************************************ 00:09:14.615 START TEST blockdev_nvme 00:09:14.615 ************************************ 00:09:14.615 11:17:36 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:14.615 * Looking for test storage... 00:09:14.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:14.615 11:17:36 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.615 11:17:36 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.615 11:17:36 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.615 11:17:36 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.615 11:17:36 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.615 11:17:36 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.615 11:17:36 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.615 11:17:36 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.615 11:17:36 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.615 11:17:36 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.615 11:17:36 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.616 11:17:36 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:09:14.616 11:17:36 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.616 11:17:36 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.616 --rc genhtml_branch_coverage=1 00:09:14.616 --rc genhtml_function_coverage=1 00:09:14.616 --rc genhtml_legend=1 00:09:14.616 --rc geninfo_all_blocks=1 00:09:14.616 --rc geninfo_unexecuted_blocks=1 00:09:14.616 00:09:14.616 ' 00:09:14.616 11:17:36 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.616 --rc genhtml_branch_coverage=1 00:09:14.616 --rc genhtml_function_coverage=1 00:09:14.616 --rc genhtml_legend=1 00:09:14.616 --rc geninfo_all_blocks=1 00:09:14.616 --rc geninfo_unexecuted_blocks=1 00:09:14.616 00:09:14.616 ' 00:09:14.616 11:17:36 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.616 --rc genhtml_branch_coverage=1 00:09:14.616 --rc genhtml_function_coverage=1 00:09:14.616 --rc genhtml_legend=1 00:09:14.616 --rc geninfo_all_blocks=1 00:09:14.616 --rc geninfo_unexecuted_blocks=1 00:09:14.616 00:09:14.616 ' 00:09:14.616 11:17:36 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.616 --rc genhtml_branch_coverage=1 00:09:14.616 --rc genhtml_function_coverage=1 00:09:14.616 --rc genhtml_legend=1 00:09:14.616 --rc geninfo_all_blocks=1 00:09:14.616 --rc geninfo_unexecuted_blocks=1 00:09:14.616 00:09:14.616 ' 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:14.616 11:17:36 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61357 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:14.616 11:17:36 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61357 00:09:14.616 11:17:36 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61357 ']' 00:09:14.616 11:17:36 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.616 11:17:36 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.616 11:17:36 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.616 11:17:36 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.616 11:17:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.875 [2024-12-10 11:17:36.847336] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
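Before the attach trace below: setup_nvme_conf (blockdev.sh@81-83) captures the output of gen_nvme.sh, which emits one bdev_nvme_attach_controller entry per PCIe controller it finds, and replays it through load_subsystem_config. An equivalent by-hand replay, assuming rpc_cmd forwards its arguments to rpc.py unchanged (paths and the -j flag as they appear in the trace):

    spdk=/home/vagrant/spdk_repo/spdk
    json=$("$spdk"/scripts/gen_nvme.sh)
    "$spdk"/scripts/rpc.py load_subsystem_config -j "$json"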
00:09:14.875 [2024-12-10 11:17:36.847749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61357 ] 00:09:14.875 [2024-12-10 11:17:37.030380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.134 [2024-12-10 11:17:37.137043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.070 11:17:37 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.070 11:17:37 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:09:16.070 11:17:37 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:09:16.070 11:17:37 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:09:16.070 11:17:37 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:09:16.070 11:17:37 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:16.070 11:17:37 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:16.070 11:17:37 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:16.070 11:17:37 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.070 11:17:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.328 11:17:38 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.328 11:17:38 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:09:16.328 11:17:38 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.328 11:17:38 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.328 11:17:38 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.328 11:17:38 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:09:16.328 11:17:38 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:09:16.328 11:17:38 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.328 11:17:38 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.328 11:17:38 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:09:16.328 11:17:38 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:09:16.329 11:17:38 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "08ac1078-f55c-4439-9905-477109b248c7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "08ac1078-f55c-4439-9905-477109b248c7",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "63f8c202-915b-4ea9-bca8-0aed5fcbbad5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "63f8c202-915b-4ea9-bca8-0aed5fcbbad5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "00e4e771-8b98-46d2-be96-bc938ee24733"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "00e4e771-8b98-46d2-be96-bc938ee24733",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4304cbfc-2ff9-4b8e-ba37-fc58afbcb1de"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4304cbfc-2ff9-4b8e-ba37-fc58afbcb1de",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "82741325-58dc-40df-b7c0-5ed8fa649084"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "82741325-58dc-40df-b7c0-5ed8fa649084",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b0f78b7b-2c4e-4c7f-a21f-e51e27b02ddd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b0f78b7b-2c4e-4c7f-a21f-e51e27b02ddd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:16.587 11:17:38 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:09:16.587 11:17:38 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:09:16.587 11:17:38 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:09:16.587 11:17:38 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61357 00:09:16.587 11:17:38 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61357 ']' 00:09:16.587 11:17:38 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61357 00:09:16.587 11:17:38 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:09:16.587 11:17:38 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.587 11:17:38 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61357 00:09:16.587 killing process with pid 61357 00:09:16.587 11:17:38 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.587 11:17:38 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.587 11:17:38 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61357' 00:09:16.587 11:17:38 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61357 00:09:16.587 11:17:38 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61357 00:09:18.489 11:17:40 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:18.489 11:17:40 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:18.489 11:17:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:18.489 11:17:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.489 11:17:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:18.489 ************************************ 00:09:18.489 START TEST bdev_hello_world 00:09:18.489 ************************************ 00:09:18.489 11:17:40 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:18.748 [2024-12-10 11:17:40.741572] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:09:18.748 [2024-12-10 11:17:40.741815] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61452 ] 00:09:19.007 [2024-12-10 11:17:40.929945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.007 [2024-12-10 11:17:41.057792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.575 [2024-12-10 11:17:41.718092] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:19.575 [2024-12-10 11:17:41.718160] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:19.575 [2024-12-10 11:17:41.718190] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:19.575 [2024-12-10 11:17:41.721318] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:19.575 [2024-12-10 11:17:41.721903] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:19.575 [2024-12-10 11:17:41.721949] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:19.575 [2024-12-10 11:17:41.722151] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
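The hello-world pass above can be reproduced outside the harness with the exact command run_test wrapped (both paths appear verbatim in the trace; running it standalone assumes the pre-generated bdev.json still describes the attached controllers):

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1

On success it emits the same NOTICE sequence seen here: open the bdev, write, read back "Hello World!", then stop the app.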
00:09:19.575 00:09:19.575 [2024-12-10 11:17:41.722187] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:20.952 00:09:20.952 real 0m2.092s 00:09:20.952 user 0m1.723s 00:09:20.952 sys 0m0.256s 00:09:20.952 11:17:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.952 11:17:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:20.952 ************************************ 00:09:20.952 END TEST bdev_hello_world 00:09:20.952 ************************************ 00:09:20.952 11:17:42 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:09:20.952 11:17:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:20.952 11:17:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.952 11:17:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.952 ************************************ 00:09:20.952 START TEST bdev_bounds 00:09:20.952 ************************************ 00:09:20.952 11:17:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:09:20.952 Process bdevio pid: 61494 00:09:20.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.952 11:17:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61494 00:09:20.952 11:17:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:20.952 11:17:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:20.952 11:17:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61494' 00:09:20.952 11:17:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61494 00:09:20.952 11:17:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61494 ']' 00:09:20.952 11:17:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.952 11:17:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.952 11:17:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.952 11:17:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.952 11:17:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:20.952 [2024-12-10 11:17:42.882993] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:09:20.952 [2024-12-10 11:17:42.883446] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61494 ] 00:09:20.952 [2024-12-10 11:17:43.065490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:21.211 [2024-12-10 11:17:43.173523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.211 [2024-12-10 11:17:43.173682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.211 [2024-12-10 11:17:43.173693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.780 11:17:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.780 11:17:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:09:21.780 11:17:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:22.059 I/O targets: 00:09:22.059 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:22.059 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:22.059 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:22.059 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:22.059 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:22.059 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:22.059 00:09:22.059 00:09:22.059 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.059 http://cunit.sourceforge.net/ 00:09:22.059 00:09:22.059 00:09:22.059 Suite: bdevio tests on: Nvme3n1 00:09:22.059 Test: blockdev write read block ...passed 00:09:22.059 Test: blockdev write zeroes read block ...passed 00:09:22.059 Test: blockdev write zeroes read no split ...passed 00:09:22.059 Test: blockdev write zeroes read split ...passed 00:09:22.059 Test: blockdev write zeroes read split partial ...passed 00:09:22.059 Test: blockdev reset ...[2024-12-10 11:17:44.112531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:22.059 [2024-12-10 11:17:44.116730] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:09:22.059 passed 00:09:22.059 Test: blockdev write read 8 blocks ...passed 00:09:22.059 Test: blockdev write read size > 128k ...passed 00:09:22.059 Test: blockdev write read invalid size ...passed 00:09:22.059 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:22.059 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:22.059 Test: blockdev write read max offset ...passed 00:09:22.059 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:22.059 Test: blockdev writev readv 8 blocks ...passed 00:09:22.059 Test: blockdev writev readv 30 x 1block ...passed 00:09:22.059 Test: blockdev writev readv block ...passed 00:09:22.059 Test: blockdev writev readv size > 128k ...passed 00:09:22.059 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:22.059 Test: blockdev comparev and writev ...[2024-12-10 11:17:44.124616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be20a000 len:0x1000 00:09:22.059 [2024-12-10 11:17:44.124691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:22.059 passed 00:09:22.059 Test: blockdev nvme passthru rw ...passed 00:09:22.059 Test: blockdev nvme passthru vendor specific ...passed 00:09:22.059 Test: blockdev nvme admin passthru ...[2024-12-10 11:17:44.125561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:22.059 [2024-12-10 11:17:44.125608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:22.059 passed 00:09:22.059 Test: blockdev copy ...passed 00:09:22.059 Suite: bdevio tests on: Nvme2n3 00:09:22.059 Test: blockdev write read block ...passed 00:09:22.059 Test: blockdev write zeroes read block ...passed 00:09:22.059 Test: blockdev write zeroes read no split ...passed 00:09:22.059 Test: blockdev write zeroes read split ...passed 00:09:22.059 Test: blockdev write zeroes read split partial ...passed 00:09:22.059 Test: blockdev reset ...[2024-12-10 11:17:44.204264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:22.059 passed 00:09:22.059 Test: blockdev write read 8 blocks ...[2024-12-10 11:17:44.209039] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:22.059 passed 00:09:22.059 Test: blockdev write read size > 128k ...passed 00:09:22.059 Test: blockdev write read invalid size ...passed 00:09:22.059 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:22.059 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:22.059 Test: blockdev write read max offset ...passed 00:09:22.059 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:22.059 Test: blockdev writev readv 8 blocks ...passed 00:09:22.059 Test: blockdev writev readv 30 x 1block ...passed 00:09:22.059 Test: blockdev writev readv block ...passed 00:09:22.059 Test: blockdev writev readv size > 128k ...passed 00:09:22.059 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:22.318 Test: blockdev comparev and writev ...[2024-12-10 11:17:44.218683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a0c06000 len:0x1000 00:09:22.318 [2024-12-10 11:17:44.218894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:22.318 passed 00:09:22.318 Test: blockdev nvme passthru rw ...passed 00:09:22.318 Test: blockdev nvme passthru vendor specific ...passed 00:09:22.319 Test: blockdev nvme admin passthru ...[2024-12-10 11:17:44.219675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:22.319 [2024-12-10 11:17:44.219716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:22.319 passed 00:09:22.319 Test: blockdev copy ...passed 00:09:22.319 Suite: bdevio tests on: Nvme2n2 00:09:22.319 Test: blockdev write read block ...passed 00:09:22.319 Test: blockdev write zeroes read block ...passed 00:09:22.319 Test: blockdev write zeroes read no split ...passed 00:09:22.319 Test: blockdev write zeroes read split ...passed 00:09:22.319 Test: blockdev write zeroes read split partial ...passed 00:09:22.319 Test: blockdev reset ...[2024-12-10 11:17:44.315549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:22.319 passed 00:09:22.319 Test: blockdev write read 8 blocks ...[2024-12-10 11:17:44.319814] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:22.319 passed 00:09:22.319 Test: blockdev write read size > 128k ...passed 00:09:22.319 Test: blockdev write read invalid size ...passed 00:09:22.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:22.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:22.319 Test: blockdev write read max offset ...passed 00:09:22.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:22.319 Test: blockdev writev readv 8 blocks ...passed 00:09:22.319 Test: blockdev writev readv 30 x 1block ...passed 00:09:22.319 Test: blockdev writev readv block ...passed 00:09:22.319 Test: blockdev writev readv size > 128k ...passed 00:09:22.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:22.319 Test: blockdev comparev and writev ...[2024-12-10 11:17:44.328299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce23c000 len:0x1000 00:09:22.319 [2024-12-10 11:17:44.328367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:22.319 passed 00:09:22.319 Test: blockdev nvme passthru rw ...passed 00:09:22.319 Test: blockdev nvme passthru vendor specific ...[2024-12-10 11:17:44.329349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:22.319 [2024-12-10 11:17:44.329392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:22.319 passed 00:09:22.319 Test: blockdev nvme admin passthru ...passed 00:09:22.319 Test: blockdev copy ...passed 00:09:22.319 Suite: bdevio tests on: Nvme2n1 00:09:22.319 Test: blockdev write read block ...passed 00:09:22.319 Test: blockdev write zeroes read block ...passed 00:09:22.319 Test: blockdev write zeroes read no split ...passed 00:09:22.319 Test: blockdev write zeroes read split ...passed 00:09:22.319 Test: blockdev write zeroes read split partial ...passed 00:09:22.319 Test: blockdev reset ...[2024-12-10 11:17:44.396790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:22.319 [2024-12-10 11:17:44.401020] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:09:22.319 passed 00:09:22.319 Test: blockdev write read 8 blocks ...
00:09:22.319 passed 00:09:22.319 Test: blockdev write read size > 128k ...passed 00:09:22.319 Test: blockdev write read invalid size ...passed 00:09:22.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:22.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:22.319 Test: blockdev write read max offset ...passed 00:09:22.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:22.319 Test: blockdev writev readv 8 blocks ...passed 00:09:22.319 Test: blockdev writev readv 30 x 1block ...passed 00:09:22.319 Test: blockdev writev readv block ...passed 00:09:22.319 Test: blockdev writev readv size > 128k ...passed 00:09:22.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:22.319 Test: blockdev comparev and writev ...[2024-12-10 11:17:44.409215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce238000 len:0x1000 00:09:22.319 [2024-12-10 11:17:44.409284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:22.319 passed 00:09:22.319 Test: blockdev nvme passthru rw ...passed 00:09:22.319 Test: blockdev nvme passthru vendor specific ...[2024-12-10 11:17:44.410120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:22.319 [2024-12-10 11:17:44.410281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:22.319 passed 00:09:22.319 Test: blockdev nvme admin passthru ...passed 00:09:22.319 Test: blockdev copy ...passed 00:09:22.319 Suite: bdevio tests on: Nvme1n1 00:09:22.319 Test: blockdev write read block ...passed 00:09:22.319 Test: blockdev write zeroes read block ...passed 00:09:22.319 Test: blockdev write zeroes read no split ...passed 00:09:22.319 Test: blockdev write zeroes read split ...passed 00:09:22.578 Test: blockdev write zeroes read split partial ...passed 00:09:22.578 Test: blockdev reset ...[2024-12-10 11:17:44.488386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:22.578 passed 00:09:22.578 Test: blockdev write read 8 blocks ...[2024-12-10 11:17:44.492137] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:22.578 passed 00:09:22.578 Test: blockdev write read size > 128k ...passed 00:09:22.578 Test: blockdev write read invalid size ...passed 00:09:22.578 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:22.578 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:22.578 Test: blockdev write read max offset ...passed 00:09:22.578 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:22.578 Test: blockdev writev readv 8 blocks ...passed 00:09:22.578 Test: blockdev writev readv 30 x 1block ...passed 00:09:22.578 Test: blockdev writev readv block ...passed 00:09:22.578 Test: blockdev writev readv size > 128k ...passed 00:09:22.578 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:22.578 Test: blockdev comparev and writev ...[2024-12-10 11:17:44.500779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce234000 len:0x1000 00:09:22.578 [2024-12-10 11:17:44.500971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:22.578 passed 00:09:22.578 Test: blockdev nvme passthru rw ...passed 00:09:22.578 Test: blockdev nvme passthru vendor specific ...[2024-12-10 11:17:44.501796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:22.578 [2024-12-10 11:17:44.501841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:22.578 passed 00:09:22.578 Test: blockdev nvme admin passthru ...passed 00:09:22.578 Test: blockdev copy ...passed 00:09:22.578 Suite: bdevio tests on: Nvme0n1 00:09:22.578 Test: blockdev write read block ...passed 00:09:22.578 Test: blockdev write zeroes read block ...passed 00:09:22.578 Test: blockdev write zeroes read no split ...passed 00:09:22.578 Test: blockdev write zeroes read split ...passed 00:09:22.578 Test: blockdev write zeroes read split partial ...passed 00:09:22.578 Test: blockdev reset ...[2024-12-10 11:17:44.579929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:22.578 [2024-12-10 11:17:44.583804] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:22.578 passed 00:09:22.578 Test: blockdev write read 8 blocks ...passed 00:09:22.578 Test: blockdev write read size > 128k ...passed 00:09:22.578 Test: blockdev write read invalid size ...passed 00:09:22.578 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:22.578 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:22.578 Test: blockdev write read max offset ...passed 00:09:22.578 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:22.578 Test: blockdev writev readv 8 blocks ...passed 00:09:22.578 Test: blockdev writev readv 30 x 1block ...passed 00:09:22.578 Test: blockdev writev readv block ...passed 00:09:22.578 Test: blockdev writev readv size > 128k ...passed 00:09:22.578 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:22.579 Test: blockdev comparev and writev ...passed 00:09:22.579 Test: blockdev nvme passthru rw ...[2024-12-10 11:17:44.591407] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:22.579 separate metadata which is not supported yet. 
00:09:22.579 passed 00:09:22.579 Test: blockdev nvme passthru vendor specific ...[2024-12-10 11:17:44.591967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:22.579 [2024-12-10 11:17:44.592136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:22.579 passed 00:09:22.579 Test: blockdev nvme admin passthru ...passed 00:09:22.579 Test: blockdev copy ...passed 00:09:22.579 00:09:22.579 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.579 suites 6 6 n/a 0 0 00:09:22.579 tests 138 138 138 0 0 00:09:22.579 asserts 893 893 893 0 n/a 00:09:22.579 00:09:22.579 Elapsed time = 1.483 seconds 00:09:22.579 0 00:09:22.579 11:17:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61494 00:09:22.579 11:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61494 ']' 00:09:22.579 11:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61494 00:09:22.579 11:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:09:22.579 11:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.579 11:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61494 00:09:22.579 killing process with pid 61494 00:09:22.579 11:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.579 11:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.579 11:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61494' 00:09:22.579 11:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61494 00:09:22.579 11:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61494 00:09:23.513 11:17:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:23.513 00:09:23.513 real 0m2.790s 00:09:23.513 user 0m7.323s 00:09:23.513 sys 0m0.371s 00:09:23.513 11:17:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.513 11:17:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:23.513 ************************************ 00:09:23.513 END TEST bdev_bounds 00:09:23.513 ************************************ 00:09:23.513 11:17:45 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:23.513 11:17:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:23.513 11:17:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.513 11:17:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.513 ************************************ 00:09:23.513 START TEST bdev_nbd 00:09:23.513 ************************************ 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:23.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
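A hand-driven sketch of the nbd setup this test automates, using only socket paths, bdev names, and commands that appear in this log (requires the kernel nbd module; error handling omitted, and the until-loop again stands in for the harness's waiting helper):

  # expose one NVMe bdev as a kernel block device over nbd, probe it, tear it down
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  until [ -S /var/tmp/spdk-nbd.sock ]; do sleep 0.1; done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # same readiness probe as waitfornbd
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

The waitfornbd helper traced below does essentially this: poll /proc/partitions for the nbd name, then issue one direct-I/O read through the device to confirm it answers.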
00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61554 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61554 /var/tmp/spdk-nbd.sock 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61554 ']' 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.513 11:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:23.772 [2024-12-10 11:17:45.727377] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:09:23.772 [2024-12-10 11:17:45.727798] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.772 [2024-12-10 11:17:45.913720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.031 [2024-12-10 11:17:46.041171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.597 11:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.598 11:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:09:24.598 11:17:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:24.598 11:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.598 11:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:24.598 11:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:24.598 11:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:24.598 11:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.598 11:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:24.598 11:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:24.856 11:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:24.856 11:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:24.856 11:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:24.856 11:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:24.856 11:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:25.114 1+0 records in 
00:09:25.114 1+0 records out 00:09:25.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604115 s, 6.8 MB/s 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:25.114 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:25.427 1+0 records in 00:09:25.427 1+0 records out 00:09:25.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536374 s, 7.6 MB/s 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:25.427 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:25.709 1+0 records in 00:09:25.709 1+0 records out 00:09:25.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563152 s, 7.3 MB/s 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:25.709 11:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:25.968 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:25.969 1+0 records in 00:09:25.969 1+0 records out 00:09:25.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720287 s, 5.7 MB/s 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:25.969 11:17:48 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:25.969 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:26.537 1+0 records in 00:09:26.537 1+0 records out 00:09:26.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755006 s, 5.4 MB/s 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:26.537 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:26.796 1+0 records in 00:09:26.796 1+0 records out 00:09:26.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597597 s, 6.9 MB/s 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:26.796 11:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:27.055 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:27.055 { 00:09:27.055 "nbd_device": "/dev/nbd0", 00:09:27.055 "bdev_name": "Nvme0n1" 00:09:27.055 }, 00:09:27.055 { 00:09:27.055 "nbd_device": "/dev/nbd1", 00:09:27.055 "bdev_name": "Nvme1n1" 00:09:27.055 }, 00:09:27.055 { 00:09:27.055 "nbd_device": "/dev/nbd2", 00:09:27.055 "bdev_name": "Nvme2n1" 00:09:27.055 }, 00:09:27.055 { 00:09:27.055 "nbd_device": "/dev/nbd3", 00:09:27.055 "bdev_name": "Nvme2n2" 00:09:27.055 }, 00:09:27.055 { 00:09:27.055 "nbd_device": "/dev/nbd4", 00:09:27.055 "bdev_name": "Nvme2n3" 00:09:27.055 }, 00:09:27.055 { 00:09:27.055 "nbd_device": "/dev/nbd5", 00:09:27.055 "bdev_name": "Nvme3n1" 00:09:27.055 } 00:09:27.055 ]' 00:09:27.055 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:27.055 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:27.055 { 00:09:27.055 "nbd_device": "/dev/nbd0", 00:09:27.055 "bdev_name": "Nvme0n1" 00:09:27.055 }, 00:09:27.055 { 00:09:27.055 "nbd_device": "/dev/nbd1", 00:09:27.055 "bdev_name": "Nvme1n1" 00:09:27.055 }, 00:09:27.055 { 00:09:27.055 "nbd_device": "/dev/nbd2", 00:09:27.055 "bdev_name": "Nvme2n1" 00:09:27.055 }, 00:09:27.055 { 00:09:27.055 "nbd_device": "/dev/nbd3", 00:09:27.055 "bdev_name": "Nvme2n2" 00:09:27.055 }, 00:09:27.055 { 00:09:27.055 "nbd_device": "/dev/nbd4", 00:09:27.055 "bdev_name": "Nvme2n3" 00:09:27.055 }, 00:09:27.055 { 00:09:27.055 "nbd_device": "/dev/nbd5", 00:09:27.055 "bdev_name": "Nvme3n1" 00:09:27.055 } 00:09:27.055 ]' 00:09:27.055 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:27.055 11:17:49 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:27.055 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.055 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:27.055 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:27.055 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:27.055 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:27.055 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:27.314 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:27.314 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:27.314 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:27.314 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.314 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.314 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:27.314 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:27.314 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:27.314 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:27.314 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:27.573 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:27.573 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:27.573 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:27.573 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.573 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.573 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:27.573 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:27.573 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:27.573 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:27.573 11:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:28.140 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:28.140 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:28.140 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:28.140 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.140 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.140 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:28.140 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:28.140 11:17:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:28.140 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.140 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:28.399 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:28.399 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:28.399 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:28.399 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.399 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.399 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:28.399 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:28.399 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.399 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.399 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:28.657 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:28.657 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:28.657 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:28.657 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.657 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.657 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:28.657 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:28.657 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.657 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.657 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:28.916 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:28.916 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:28.916 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:28.916 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.916 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.916 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:28.916 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:28.916 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.916 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:28.916 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.916 11:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.174 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:29.174 11:17:51 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:29.174 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:29.447 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:29.717 /dev/nbd0 00:09:29.717 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:29.717 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:29.717 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:29.717 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:29.717 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:29.717 
11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:29.717 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:29.717 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:29.717 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:29.717 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:29.717 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:29.717 1+0 records in 00:09:29.717 1+0 records out 00:09:29.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673647 s, 6.1 MB/s 00:09:29.717 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:29.717 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:29.718 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:29.718 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:29.718 11:17:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:29.718 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.718 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:29.718 11:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:29.976 /dev/nbd1 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:29.976 1+0 records in 00:09:29.976 1+0 records out 00:09:29.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677474 s, 6.0 MB/s 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:29.976 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:30.234 /dev/nbd10 00:09:30.493 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:30.493 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:30.493 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:30.494 1+0 records in 00:09:30.494 1+0 records out 00:09:30.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458245 s, 8.9 MB/s 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:30.494 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:30.752 /dev/nbd11 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:30.752 11:17:52 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:30.752 1+0 records in 00:09:30.752 1+0 records out 00:09:30.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675544 s, 6.1 MB/s 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:30.752 11:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:31.011 /dev/nbd12 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.011 1+0 records in 00:09:31.011 1+0 records out 00:09:31.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577663 s, 7.1 MB/s 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:31.011 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:31.269 /dev/nbd13 
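[editor's note] The trace around this point shows the attach pattern nbd_common.sh repeats for each of the six namespaces: export a bdev as an NBD node over the RPC socket, poll /proc/partitions up to 20 times until the kernel lists the node, then issue a single 4 KiB O_DIRECT read to prove the device answers I/O. A minimal self-contained sketch of that pattern, assuming an SPDK target is already listening on /var/tmp/spdk-nbd.sock (the attach_and_wait helper name is illustrative, not part of SPDK):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    attach_and_wait() {
        local bdev=$1 nbd=$2 i
        "$rpc" -s "$sock" nbd_start_disk "$bdev" "$nbd"
        for ((i = 1; i <= 20; i++)); do
            # Wait until the kernel publishes the node in /proc/partitions.
            grep -q -w "$(basename "$nbd")" /proc/partitions && break
            sleep 0.1
        done
        # One 4 KiB direct-I/O read confirms the node actually serves data.
        dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct
    }
    attach_and_wait Nvme0n1 /dev/nbd0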
00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.269 1+0 records in 00:09:31.269 1+0 records out 00:09:31.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609109 s, 6.7 MB/s 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.269 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:31.836 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd0", 00:09:31.836 "bdev_name": "Nvme0n1" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd1", 00:09:31.836 "bdev_name": "Nvme1n1" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd10", 00:09:31.836 "bdev_name": "Nvme2n1" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd11", 00:09:31.836 "bdev_name": "Nvme2n2" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd12", 00:09:31.836 "bdev_name": "Nvme2n3" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd13", 00:09:31.836 "bdev_name": "Nvme3n1" 00:09:31.836 } 00:09:31.836 ]' 00:09:31.836 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd0", 00:09:31.836 "bdev_name": "Nvme0n1" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd1", 00:09:31.836 "bdev_name": "Nvme1n1" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd10", 00:09:31.836 "bdev_name": "Nvme2n1" 
00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd11", 00:09:31.836 "bdev_name": "Nvme2n2" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd12", 00:09:31.836 "bdev_name": "Nvme2n3" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd13", 00:09:31.836 "bdev_name": "Nvme3n1" 00:09:31.836 } 00:09:31.836 ]' 00:09:31.836 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:31.836 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:31.836 /dev/nbd1 00:09:31.836 /dev/nbd10 00:09:31.836 /dev/nbd11 00:09:31.837 /dev/nbd12 00:09:31.837 /dev/nbd13' 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:31.837 /dev/nbd1 00:09:31.837 /dev/nbd10 00:09:31.837 /dev/nbd11 00:09:31.837 /dev/nbd12 00:09:31.837 /dev/nbd13' 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:31.837 256+0 records in 00:09:31.837 256+0 records out 00:09:31.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653207 s, 161 MB/s 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:31.837 256+0 records in 00:09:31.837 256+0 records out 00:09:31.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16399 s, 6.4 MB/s 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.837 11:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:32.096 256+0 records in 00:09:32.096 256+0 records out 00:09:32.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151685 s, 6.9 MB/s 00:09:32.096 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.096 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:32.354 256+0 records in 00:09:32.354 256+0 records out 00:09:32.354 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147123 s, 7.1 MB/s 00:09:32.354 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.354 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:32.354 256+0 records in 00:09:32.354 256+0 records out 00:09:32.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158118 s, 6.6 MB/s 00:09:32.354 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.354 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:32.612 256+0 records in 00:09:32.612 256+0 records out 00:09:32.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146869 s, 7.1 MB/s 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:32.612 256+0 records in 00:09:32.612 256+0 records out 00:09:32.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147515 s, 7.1 MB/s 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.612 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:32.872 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.872 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:32.872 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.872 11:17:54 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:32.872 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:32.872 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:32.872 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.872 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:32.872 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:32.872 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:32.872 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.872 11:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.129 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.129 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.129 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.129 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.129 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.129 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.129 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.129 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.129 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.129 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.474 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.474 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.474 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.474 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.474 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.474 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.474 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.474 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.474 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.474 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:33.733 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:33.733 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:33.733 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:33.733 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.733 11:17:55 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.733 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:33.733 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.733 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.733 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.733 11:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:33.992 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:33.992 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:33.992 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:33.992 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.992 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.992 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:33.992 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.992 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.992 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.992 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:34.250 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:34.250 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:34.250 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:34.250 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.250 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.250 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:34.250 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.250 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.250 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.250 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:34.509 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:34.509 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:34.509 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:34.509 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.509 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.509 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:34.509 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.509 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.509 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:34.509 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.509 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:35.076 11:17:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:35.076 11:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.076 11:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:35.076 11:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:35.334 malloc_lvol_verify 00:09:35.334 11:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:35.593 f78a6886-1efd-441f-8dd6-194e6caf17e7 00:09:35.593 11:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:35.851 be72e769-e578-4d83-aa07-787f66d9e035 00:09:35.851 11:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:36.110 /dev/nbd0 00:09:36.110 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:36.110 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:36.110 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:36.110 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:36.110 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:36.110 mke2fs 1.47.0 (5-Feb-2023) 00:09:36.110 Discarding device blocks: 0/4096 done 00:09:36.110 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:36.110 00:09:36.110 Allocating group tables: 0/1 done 00:09:36.110 Writing inode tables: 0/1 done 00:09:36.110 Creating journal (1024 blocks): done 00:09:36.110 Writing superblocks and filesystem accounting information: 0/1 done 00:09:36.110 00:09:36.110 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:36.110 11:17:58 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.110 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:36.110 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:36.110 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:36.110 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.110 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61554 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61554 ']' 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61554 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61554 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.370 killing process with pid 61554 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61554' 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61554 00:09:36.370 11:17:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61554 00:09:37.743 11:17:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:37.743 00:09:37.743 real 0m13.930s 00:09:37.743 user 0m20.536s 00:09:37.744 sys 0m4.153s 00:09:37.744 11:17:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.744 ************************************ 00:09:37.744 END TEST bdev_nbd 00:09:37.744 ************************************ 00:09:37.744 11:17:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:37.744 11:17:59 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:09:37.744 skipping fio tests on NVMe due to multi-ns failures. 00:09:37.744 11:17:59 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:09:37.744 11:17:59 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
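[editor's note] The nbd_dd_data_verify pass above boils down to: generate 1 MiB of random data, dd it onto every NBD node with O_DIRECT, then cmp each node against the source file before tearing the devices down; the phase closes with a small lvol smoke test (malloc bdev -> lvstore -> lvol -> nbd0 -> mkfs.ext4). A reduced sketch of the write/verify step for a single device, assuming /dev/nbd0 is attached as shown earlier:

    src=$(mktemp)
    dd if=/dev/urandom of="$src" bs=4096 count=256          # 1 MiB of test data
    dd if="$src" of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$src" /dev/nbd0 && echo 'nbd0: data intact'
    rm -f "$src"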
00:09:37.744 11:17:59 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:37.744 11:17:59 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:37.744 11:17:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:09:37.744 11:17:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.744 11:17:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:37.744 ************************************
00:09:37.744 START TEST bdev_verify
00:09:37.744 ************************************
00:09:37.744 11:17:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:37.744 [2024-12-10 11:17:59.709948] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:09:37.744 [2024-12-10 11:17:59.710132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61970 ]
00:09:37.744 [2024-12-10 11:17:59.895227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:38.001 [2024-12-10 11:18:00.003700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:38.001 [2024-12-10 11:18:00.003700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:38.569 Running I/O for 5 seconds...
00:09:40.976 19456.00 IOPS, 76.00 MiB/s [2024-12-10T11:18:04.077Z] 18688.00 IOPS, 73.00 MiB/s [2024-12-10T11:18:05.011Z] 18432.00 IOPS, 72.00 MiB/s [2024-12-10T11:18:05.947Z] 18688.00 IOPS, 73.00 MiB/s [2024-12-10T11:18:05.947Z] 18444.80 IOPS, 72.05 MiB/s
00:09:43.780 Latency(us)
00:09:43.780 [2024-12-10T11:18:05.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:43.780 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:43.780 Verification LBA range: start 0x0 length 0xbd0bd
00:09:43.780 Nvme0n1 : 5.06 1541.93 6.02 0.00 0.00 82792.99 17277.67 79596.45
00:09:43.780 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:43.780 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:43.780 Nvme0n1 : 5.06 1517.00 5.93 0.00 0.00 84141.64 16562.73 82456.20
00:09:43.780 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:43.780 Verification LBA range: start 0x0 length 0xa0000
00:09:43.780 Nvme1n1 : 5.07 1541.37 6.02 0.00 0.00 82620.75 15490.33 73876.95
00:09:43.780 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:43.780 Verification LBA range: start 0xa0000 length 0xa0000
00:09:43.780 Nvme1n1 : 5.06 1516.42 5.92 0.00 0.00 84040.54 19065.02 79119.83
00:09:43.780 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:43.780 Verification LBA range: start 0x0 length 0x80000
00:09:43.780 Nvme2n1 : 5.07 1540.59 6.02 0.00 0.00 82516.90 16086.11 71493.82
00:09:43.780 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:43.780 Verification LBA range: start 0x80000 length 0x80000
00:09:43.780 Nvme2n1 : 5.07 1514.70 5.92 0.00 0.00 83931.59 21328.99 75306.82
00:09:43.780 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:43.780 Verification LBA range: start 0x0 length 0x80000
00:09:43.780 Nvme2n2 : 5.07 1540.10 6.02 0.00 0.00 82374.73 15490.33 69587.32
00:09:43.780 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:43.780 Verification LBA range: start 0x80000 length 0x80000
00:09:43.780 Nvme2n2 : 5.07 1513.75 5.91 0.00 0.00 83801.76 22520.55 72447.07
00:09:43.780 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:43.780 Verification LBA range: start 0x0 length 0x80000
00:09:43.780 Nvme2n3 : 5.07 1539.12 6.01 0.00 0.00 82243.53 16920.20 72447.07
00:09:43.780 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:43.780 Verification LBA range: start 0x80000 length 0x80000
00:09:43.780 Nvme2n3 : 5.08 1512.68 5.91 0.00 0.00 83671.99 20614.05 77689.95
00:09:43.780 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:43.780 Verification LBA range: start 0x0 length 0x20000
00:09:43.780 Nvme3n1 : 5.08 1538.04 6.01 0.00 0.00 82109.20 13166.78 75306.82
00:09:43.780 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:43.780 Verification LBA range: start 0x20000 length 0x20000
00:09:43.780 Nvme3n1 : 5.08 1511.77 5.91 0.00 0.00 83528.81 11439.01 81502.95
00:09:43.780 [2024-12-10T11:18:05.947Z] ===================================================================================================================
00:09:43.780 [2024-12-10T11:18:05.947Z] Total : 18327.45 71.59 0.00 0.00 83142.04 11439.01 82456.20
00:09:45.156
00:09:45.156 real 0m7.522s
00:09:45.156 user 0m13.888s
00:09:45.156 sys 0m0.279s
00:09:45.156 11:18:07 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:45.156 11:18:07 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:45.156 ************************************
00:09:45.156 END TEST bdev_verify
00:09:45.156 ************************************
00:09:45.156 11:18:07 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:45.156 11:18:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:09:45.156 11:18:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:45.156 11:18:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:45.156 ************************************
00:09:45.156 START TEST bdev_verify_big_io
00:09:45.156 ************************************
00:09:45.156 11:18:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:45.156 [2024-12-10 11:18:07.297869] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
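[editor's note] Each of the verification phases in this log is the same bdevperf binary driven with a different workload profile; the bdev_verify pass that just finished was launched with exactly the flags recorded in the trace (queue depth 128, 4 KiB I/O, verify workload, 5 s, core mask 0x3):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

With -C and a two-core mask, both reactors submit I/O to every bdev, which is why each device appears twice in the table above (one job per core mask).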
00:09:45.156 [2024-12-10 11:18:07.298052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62074 ]
00:09:45.415 [2024-12-10 11:18:07.486176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:45.673 [2024-12-10 11:18:07.615334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:45.673 [2024-12-10 11:18:07.615338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:46.609 Running I/O for 5 seconds...
00:09:51.299 849.00 IOPS, 53.06 MiB/s [2024-12-10T11:18:14.401Z] 2183.50 IOPS, 136.47 MiB/s [2024-12-10T11:18:14.401Z] 2766.33 IOPS, 172.90 MiB/s
00:09:52.234 Latency(us)
00:09:52.234 [2024-12-10T11:18:14.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:52.234 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:52.234 Verification LBA range: start 0x0 length 0xbd0b
00:09:52.234 Nvme0n1 : 5.83 119.45 7.47 0.00 0.00 1028516.21 13345.51 1494697.43
00:09:52.234 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:52.234 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:52.234 Nvme0n1 : 5.64 124.90 7.81 0.00 0.00 985101.88 26095.24 991380.95
00:09:52.234 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:52.234 Verification LBA range: start 0x0 length 0xa000
00:09:52.234 Nvme1n1 : 5.83 119.42 7.46 0.00 0.00 997929.41 30980.65 1509949.44
00:09:52.234 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:52.234 Verification LBA range: start 0xa000 length 0xa000
00:09:52.234 Nvme1n1 : 5.76 128.18 8.01 0.00 0.00 934753.58 67680.81 842673.80
00:09:52.234 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:52.234 Verification LBA range: start 0x0 length 0x8000
00:09:52.234 Nvme2n1 : 5.83 122.46 7.65 0.00 0.00 951650.14 51237.24 1532827.46
00:09:52.234 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:52.234 Verification LBA range: start 0x8000 length 0x8000
00:09:52.234 Nvme2n1 : 5.76 133.28 8.33 0.00 0.00 883744.43 50998.92 865551.83
00:09:52.234 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:52.234 Verification LBA range: start 0x0 length 0x8000
00:09:52.234 Nvme2n2 : 5.83 122.38 7.65 0.00 0.00 920767.62 67204.19 1563331.49
00:09:52.234 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:52.234 Verification LBA range: start 0x8000 length 0x8000
00:09:52.234 Nvme2n2 : 5.76 133.22 8.33 0.00 0.00 856406.26 51952.17 896055.85
00:09:52.234 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:52.234 Verification LBA range: start 0x0 length 0x8000
00:09:52.234 Nvme2n3 : 5.92 133.86 8.37 0.00 0.00 820903.02 17515.99 1586209.51
00:09:52.234 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:52.234 Verification LBA range: start 0x8000 length 0x8000
00:09:52.234 Nvme2n3 : 5.84 139.36 8.71 0.00 0.00 796497.71 28240.06 1426063.36
00:09:52.234 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:52.234 Verification LBA range: start 0x0 length 0x2000
00:09:52.234 Nvme3n1 : 5.92 147.40 9.21 0.00 0.00 726658.30 1079.85 1601461.53
00:09:52.234 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:52.234 Verification LBA range: start 0x2000 length 0x2000
00:09:52.234 Nvme3n1 : 5.90 157.32 9.83 0.00 0.00 686599.96 6672.76 922746.88
00:09:52.234 [2024-12-10T11:18:14.401Z] ===================================================================================================================
00:09:52.234 [2024-12-10T11:18:14.401Z] Total : 1581.23 98.83 0.00 0.00 873128.71 1079.85 1601461.53
00:09:54.134
00:09:54.134 real 0m8.782s
00:09:54.134 user 0m16.333s
00:09:54.134 sys 0m0.298s
00:09:54.134 11:18:15 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:54.134 11:18:15 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:09:54.134 ************************************
00:09:54.134 END TEST bdev_verify_big_io
00:09:54.134 ************************************
00:09:54.134 11:18:15 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:54.134 11:18:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:09:54.134 11:18:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:54.134 11:18:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:54.134 ************************************
00:09:54.134 START TEST bdev_write_zeroes
00:09:54.134 ************************************
00:09:54.134 11:18:16 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:54.134 [2024-12-10 11:18:16.097649] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:09:54.134 [2024-12-10 11:18:16.097808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62184 ]
00:09:54.134 [2024-12-10 11:18:16.276353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:54.392 [2024-12-10 11:18:16.400590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:55.029 Running I/O for 1 seconds...
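[editor's note] A quick consistency check on these result tables: MiB/s is simply IOPS times the I/O size. For the big-I/O pass above, 1581.23 IOPS x 65536 B / 2^20 = 98.83 MiB/s, matching the Total row exactly; the earlier 4 KiB verify pass obeys the same identity (18327.45 x 4096 / 2^20 = 71.59 MiB/s).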
00:09:55.963 52608.00 IOPS, 205.50 MiB/s
00:09:55.963
00:09:55.963 Latency(us)
00:09:55.963 [2024-12-10T11:18:18.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:55.963 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:55.963 Nvme0n1 : 1.02 8752.56 34.19 0.00 0.00 14591.29 9115.46 23831.27
00:09:55.963 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:55.963 Nvme1n1 : 1.02 8741.86 34.15 0.00 0.00 14589.09 11439.01 23473.80
00:09:55.963 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:55.963 Nvme2n1 : 1.03 8731.27 34.11 0.00 0.00 14562.59 11558.17 22401.40
00:09:55.963 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:55.963 Nvme2n2 : 1.03 8720.53 34.06 0.00 0.00 14526.03 10128.29 21686.46
00:09:55.963 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:55.963 Nvme2n3 : 1.03 8709.98 34.02 0.00 0.00 14520.58 9532.51 22282.24
00:09:55.963 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:55.963 Nvme3n1 : 1.03 8699.49 33.98 0.00 0.00 14490.67 7477.06 24069.59
00:09:55.963 [2024-12-10T11:18:18.130Z] ===================================================================================================================
00:09:55.963 [2024-12-10T11:18:18.130Z] Total : 52355.69 204.51 0.00 0.00 14546.71 7477.06 24069.59
00:09:57.338
00:09:57.338 real 0m3.131s
00:09:57.338 user 0m2.769s
00:09:57.338 sys 0m0.237s
00:09:57.338 11:18:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:57.338 11:18:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:57.338 ************************************
00:09:57.338 END TEST bdev_write_zeroes
00:09:57.338 ************************************
00:09:57.338 11:18:19 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:57.338 11:18:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:09:57.338 11:18:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:57.338 11:18:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:57.338 ************************************
00:09:57.338 START TEST bdev_json_nonenclosed
00:09:57.338 ************************************
00:09:57.338 11:18:19 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:57.338 [2024-12-10 11:18:19.272925] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:09:57.338 [2024-12-10 11:18:19.273075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62243 ] 00:09:57.338 [2024-12-10 11:18:19.447095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.596 [2024-12-10 11:18:19.555660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.596 [2024-12-10 11:18:19.555776] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:57.596 [2024-12-10 11:18:19.555806] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:57.596 [2024-12-10 11:18:19.555821] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:57.854 00:09:57.854 real 0m0.641s 00:09:57.854 user 0m0.422s 00:09:57.854 sys 0m0.112s 00:09:57.854 11:18:19 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.854 11:18:19 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:57.854 ************************************ 00:09:57.854 END TEST bdev_json_nonenclosed 00:09:57.854 ************************************ 00:09:57.854 11:18:19 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:57.854 11:18:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:57.854 11:18:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.854 11:18:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.854 ************************************ 00:09:57.854 START TEST bdev_json_nonarray 00:09:57.854 ************************************ 00:09:57.854 11:18:19 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:57.854 [2024-12-10 11:18:19.992746] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:09:57.854 [2024-12-10 11:18:19.992953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62267 ] 00:09:58.112 [2024-12-10 11:18:20.177189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.370 [2024-12-10 11:18:20.312869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.370 [2024-12-10 11:18:20.313007] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:58.370 [2024-12-10 11:18:20.313040] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:58.370 [2024-12-10 11:18:20.313057] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:58.629 00:09:58.629 real 0m0.770s 00:09:58.629 user 0m0.532s 00:09:58.629 sys 0m0.130s 00:09:58.629 ************************************ 00:09:58.629 END TEST bdev_json_nonarray 00:09:58.629 ************************************ 00:09:58.629 11:18:20 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.629 11:18:20 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:58.629 11:18:20 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:09:58.629 11:18:20 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:09:58.629 11:18:20 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:09:58.629 11:18:20 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:09:58.629 11:18:20 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:09:58.629 11:18:20 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:58.629 11:18:20 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:58.629 11:18:20 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:58.629 11:18:20 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:58.629 11:18:20 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:58.629 11:18:20 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:58.629 00:09:58.629 real 0m44.156s 00:09:58.629 user 1m7.947s 00:09:58.629 sys 0m6.698s 00:09:58.629 11:18:20 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.629 11:18:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:58.629 ************************************ 00:09:58.629 END TEST blockdev_nvme 00:09:58.629 ************************************ 00:09:58.629 11:18:20 -- spdk/autotest.sh@209 -- # uname -s 00:09:58.629 11:18:20 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:09:58.629 11:18:20 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:58.629 11:18:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.629 11:18:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.629 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:09:58.629 ************************************ 00:09:58.629 START TEST blockdev_nvme_gpt 00:09:58.629 ************************************ 00:09:58.629 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:58.888 * Looking for test storage... 
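blockdev_nvme_gpt begins here, and two dense stretches of the trace below are worth decoding in advance. First, the scripts/common.sh lines that follow are only a semantic version comparison (lt 1.15 2 via cmp_versions/decimal) choosing which lcov options to export. Second, setup_gpt_conf looks for a namespace with no disk label (parted reports "unrecognised disk label" for /dev/nvme0n1), partitions it, and stamps SPDK's GPT type GUIDs on the two halves. Condensed from the parted/sgdisk trace further below (the GUIDs are the SPDK_GPT_PART_TYPE_GUID values read out of module/bdev/gpt/gpt.h):

  # setup_gpt_conf, boiled down: GPT-label the bare namespace, split it 50/50,
  # then retag partition 1 with SPDK's current GPT type GUID and partition 2
  # with the legacy ("old") GUID, using fixed unique partition GUIDs.
  parted -s /dev/nvme0n1 mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
         -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
         -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1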
00:09:58.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:58.888 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.888 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:58.888 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.888 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.888 11:18:20 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:09:58.888 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.888 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:58.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.888 --rc genhtml_branch_coverage=1 00:09:58.888 --rc genhtml_function_coverage=1 00:09:58.888 --rc genhtml_legend=1 00:09:58.888 --rc geninfo_all_blocks=1 00:09:58.888 --rc geninfo_unexecuted_blocks=1 00:09:58.888 00:09:58.888 ' 00:09:58.888 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:58.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.888 --rc 
genhtml_branch_coverage=1 00:09:58.888 --rc genhtml_function_coverage=1 00:09:58.888 --rc genhtml_legend=1 00:09:58.888 --rc geninfo_all_blocks=1 00:09:58.888 --rc geninfo_unexecuted_blocks=1 00:09:58.888 00:09:58.888 ' 00:09:58.888 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:58.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.888 --rc genhtml_branch_coverage=1 00:09:58.888 --rc genhtml_function_coverage=1 00:09:58.888 --rc genhtml_legend=1 00:09:58.888 --rc geninfo_all_blocks=1 00:09:58.888 --rc geninfo_unexecuted_blocks=1 00:09:58.888 00:09:58.888 ' 00:09:58.888 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:58.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.888 --rc genhtml_branch_coverage=1 00:09:58.888 --rc genhtml_function_coverage=1 00:09:58.888 --rc genhtml_legend=1 00:09:58.888 --rc geninfo_all_blocks=1 00:09:58.888 --rc geninfo_unexecuted_blocks=1 00:09:58.888 00:09:58.888 ' 00:09:58.888 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:58.888 11:18:20 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:58.888 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62351 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:09:58.889 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62351 00:09:58.889 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62351 ']' 00:09:58.889 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.889 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.889 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.889 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.889 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:59.147 [2024-12-10 11:18:21.068254] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:09:59.147 [2024-12-10 11:18:21.068431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62351 ] 00:09:59.147 [2024-12-10 11:18:21.250577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.405 [2024-12-10 11:18:21.352725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.972 11:18:22 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.972 11:18:22 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:09:59.972 11:18:22 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:09:59.972 11:18:22 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:09:59.972 11:18:22 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:00.538 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:00.538 Waiting for block devices as requested 00:10:00.538 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:00.796 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:00.796 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:00.796 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:06.122 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:06.122 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:06.122 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:06.122 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:06.122 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:10:06.122 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:10:06.122 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:10:06.123 11:18:27 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:06.123 11:18:27 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:10:06.123 BYT; 00:10:06.123 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:10:06.123 BYT; 00:10:06.123 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:06.123 11:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:06.123 11:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:06.123 11:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:06.123 11:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:06.123 11:18:28 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:06.123 11:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:06.123 11:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:10:07.059 The operation has completed successfully. 00:10:07.059 11:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:10:07.994 The operation has completed successfully. 00:10:07.994 11:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:08.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:09.126 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.126 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.126 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.126 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.384 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:10:09.384 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.384 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:09.384 [] 00:10:09.384 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.384 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:10:09.384 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:09.384 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:09.384 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:09.384 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:09.384 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.384 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.642 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:10:09.642 11:18:31 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.642 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:10:09.642 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.642 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.642 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.642 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:10:09.642 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.642 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:09.642 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:10:09.901 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.901 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:10:09.901 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:10:09.902 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "40338632-cb65-46f3-b2fe-2dcabf4fc6ca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "40338632-cb65-46f3-b2fe-2dcabf4fc6ca",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ac1ea797-cf09-4bf0-92af-73f621ea5d49"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ac1ea797-cf09-4bf0-92af-73f621ea5d49",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "bb6c0fa4-91a0-4de6-bfab-5eccdd7d4a81"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb6c0fa4-91a0-4de6-bfab-5eccdd7d4a81",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8aa0716a-0444-460b-afd8-7d3d9a8b68d9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8aa0716a-0444-460b-afd8-7d3d9a8b68d9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "389b5024-bac2-41d1-bbb5-529e5861fec0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "389b5024-bac2-41d1-bbb5-529e5861fec0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:09.902 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:10:09.902 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:10:09.902 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:10:09.902 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62351 00:10:09.902 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62351 ']' 00:10:09.902 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62351 00:10:09.902 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:10:09.902 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.902 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62351 00:10:09.902 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.902 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.902 killing process with pid 62351 00:10:09.902 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62351' 00:10:09.902 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62351 00:10:09.902 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62351 00:10:11.857 11:18:34 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:12.116 11:18:34 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:12.116 11:18:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:12.116 11:18:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.116 11:18:34 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:12.116 ************************************ 00:10:12.116 START TEST bdev_hello_world 00:10:12.116 ************************************ 00:10:12.116 11:18:34 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:12.116 [2024-12-10 11:18:34.136451] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:10:12.116 [2024-12-10 11:18:34.136670] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62984 ] 00:10:12.375 [2024-12-10 11:18:34.317140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.375 [2024-12-10 11:18:34.420493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.941 [2024-12-10 11:18:35.048486] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:12.941 [2024-12-10 11:18:35.048559] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:12.941 [2024-12-10 11:18:35.048601] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:12.941 [2024-12-10 11:18:35.051772] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:12.941 [2024-12-10 11:18:35.052285] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:12.941 [2024-12-10 11:18:35.052325] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:12.941 [2024-12-10 11:18:35.052543] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
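The hello_bdev example running above opens the Nvme0n1 bdev, writes "Hello World!", reads it back, and stops the app; the standalone invocation is exactly the one the harness traced at the start of the test:

  # Hello-world smoke test against the first NVMe bdev from the generated config.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1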
00:10:12.941 00:10:12.941 [2024-12-10 11:18:35.052603] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:14.317 00:10:14.317 real 0m2.017s 00:10:14.317 user 0m1.683s 00:10:14.317 sys 0m0.223s 00:10:14.317 ************************************ 00:10:14.317 END TEST bdev_hello_world 00:10:14.317 ************************************ 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:14.317 11:18:36 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:14.317 11:18:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.317 11:18:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.317 11:18:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:14.317 ************************************ 00:10:14.317 START TEST bdev_bounds 00:10:14.317 ************************************ 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63026 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:14.317 Process bdevio pid: 63026 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63026' 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63026 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63026 ']' 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.317 11:18:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:14.317 [2024-12-10 11:18:36.227584] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
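bdev_bounds drives the bdevio app starting above: -w makes bdevio initialize and then wait, waitforlisten blocks until its RPC socket is up, and the companion tests.py then fires the perform_tests RPC that produces the per-bdev suites below. The sequence, condensed (paths as traced; trailing empty argument omitted):

  # Start bdevio in wait mode against the generated config, then trigger the
  # test sweep over RPC once the app is listening on /var/tmp/spdk.sock.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests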
00:10:14.317 [2024-12-10 11:18:36.228870] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63026 ] 00:10:14.317 [2024-12-10 11:18:36.422412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:14.575 [2024-12-10 11:18:36.531121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.575 [2024-12-10 11:18:36.531166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.575 [2024-12-10 11:18:36.531171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.141 11:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.141 11:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:15.141 11:18:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:15.401 I/O targets: 00:10:15.401 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:15.401 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:10:15.401 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:10:15.401 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:15.401 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:15.401 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:15.401 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:15.401 00:10:15.401 00:10:15.401 CUnit - A unit testing framework for C - Version 2.1-3 00:10:15.401 http://cunit.sourceforge.net/ 00:10:15.401 00:10:15.401 00:10:15.401 Suite: bdevio tests on: Nvme3n1 00:10:15.401 Test: blockdev write read block ...passed 00:10:15.401 Test: blockdev write zeroes read block ...passed 00:10:15.401 Test: blockdev write zeroes read no split ...passed 00:10:15.401 Test: blockdev write zeroes read split ...passed 00:10:15.401 Test: blockdev write zeroes read split partial ...passed 00:10:15.401 Test: blockdev reset ...[2024-12-10 11:18:37.391382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:15.401 [2024-12-10 11:18:37.395344] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
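An aside on the reset step that opens each suite: bdevio submits a bdev-level reset, which for NVMe bdevs disconnects and reconnects the backing controller (the nvme_ctrlr_disconnect / bdev_nvme_reset_ctrlr_complete notices above). Outside the harness, a comparable reset can be requested over RPC; a sketch, assuming SPDK's rpc.py helper with its bdev_nvme_reset_controller method and the name the controller was attached under (Nvme3 here):

  # Ask bdev_nvme to reset the controller registered as Nvme3 (backing Nvme3n1);
  # assumes a running target on the default /var/tmp/spdk.sock.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme3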
00:10:15.401 passed 00:10:15.401 Test: blockdev write read 8 blocks ...passed 00:10:15.401 Test: blockdev write read size > 128k ...passed 00:10:15.401 Test: blockdev write read invalid size ...passed 00:10:15.401 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.401 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.401 Test: blockdev write read max offset ...passed 00:10:15.401 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:15.401 Test: blockdev writev readv 8 blocks ...passed 00:10:15.401 Test: blockdev writev readv 30 x 1block ...passed 00:10:15.401 Test: blockdev writev readv block ...passed 00:10:15.401 Test: blockdev writev readv size > 128k ...passed 00:10:15.401 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:15.401 Test: blockdev comparev and writev ...[2024-12-10 11:18:37.405102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bba04000 len:0x1000 00:10:15.401 [2024-12-10 11:18:37.405170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:15.401 passed 00:10:15.401 Test: blockdev nvme passthru rw ...passed 00:10:15.401 Test: blockdev nvme passthru vendor specific ...passed 00:10:15.401 Test: blockdev nvme admin passthru ...[2024-12-10 11:18:37.405946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:15.401 [2024-12-10 11:18:37.406000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:15.401 passed 00:10:15.401 Test: blockdev copy ...passed 00:10:15.401 Suite: bdevio tests on: Nvme2n3 00:10:15.401 Test: blockdev write read block ...passed 00:10:15.401 Test: blockdev write zeroes read block ...passed 00:10:15.401 Test: blockdev write zeroes read no split ...passed 00:10:15.402 Test: blockdev write zeroes read split ...passed 00:10:15.402 Test: blockdev write zeroes read split partial ...passed 00:10:15.402 Test: blockdev reset ...[2024-12-10 11:18:37.471161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:15.402 passed 00:10:15.402 Test: blockdev write read 8 blocks ...[2024-12-10 11:18:37.475493] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
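A note on the alarming-looking NOTICEs inside tests that nevertheless end in "passed", here and in every suite below: the comparev-and-writev case deliberately submits a miscompare, so the COMPARE FAILURE (02/85) completion is the outcome it is checking for, and the vendor-specific and admin passthru cases submit commands the controller does not implement, so the INVALID OPCODE (00/01) completions are likewise the expected result rather than errors.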
00:10:15.402 passed 00:10:15.402 Test: blockdev write read size > 128k ...passed 00:10:15.402 Test: blockdev write read invalid size ...passed 00:10:15.402 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.402 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.402 Test: blockdev write read max offset ...passed 00:10:15.402 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:15.402 Test: blockdev writev readv 8 blocks ...passed 00:10:15.402 Test: blockdev writev readv 30 x 1block ...passed 00:10:15.402 Test: blockdev writev readv block ...passed 00:10:15.402 Test: blockdev writev readv size > 128k ...passed 00:10:15.402 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:15.402 Test: blockdev comparev and writev ...[2024-12-10 11:18:37.483864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:10:15.402 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2bba02000 len:0x1000 00:10:15.402 [2024-12-10 11:18:37.484062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:15.402 passed 00:10:15.402 Test: blockdev nvme passthru vendor specific ...passed 00:10:15.402 Test: blockdev nvme admin passthru ...[2024-12-10 11:18:37.484824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:15.402 [2024-12-10 11:18:37.484879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:15.402 passed 00:10:15.402 Test: blockdev copy ...passed 00:10:15.402 Suite: bdevio tests on: Nvme2n2 00:10:15.402 Test: blockdev write read block ...passed 00:10:15.402 Test: blockdev write zeroes read block ...passed 00:10:15.402 Test: blockdev write zeroes read no split ...passed 00:10:15.402 Test: blockdev write zeroes read split ...passed 00:10:15.402 Test: blockdev write zeroes read split partial ...passed 00:10:15.402 Test: blockdev reset ...[2024-12-10 11:18:37.562346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:15.670 passed 00:10:15.670 Test: blockdev write read 8 blocks ...[2024-12-10 11:18:37.566876] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:15.670 passed 00:10:15.670 Test: blockdev write read size > 128k ...passed 00:10:15.670 Test: blockdev write read invalid size ...passed 00:10:15.670 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.670 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.670 Test: blockdev write read max offset ...passed 00:10:15.670 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:15.670 Test: blockdev writev readv 8 blocks ...passed 00:10:15.670 Test: blockdev writev readv 30 x 1block ...passed 00:10:15.670 Test: blockdev writev readv block ...passed 00:10:15.670 Test: blockdev writev readv size > 128k ...passed 00:10:15.670 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:15.670 Test: blockdev comparev and writev ...[2024-12-10 11:18:37.575458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:10:15.670 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2cf838000 len:0x1000 00:10:15.670 [2024-12-10 11:18:37.575661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:15.670 passed 00:10:15.670 Test: blockdev nvme passthru vendor specific ...passed 00:10:15.670 Test: blockdev nvme admin passthru ...[2024-12-10 11:18:37.576533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:15.670 [2024-12-10 11:18:37.576585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:15.670 passed 00:10:15.670 Test: blockdev copy ...passed 00:10:15.670 Suite: bdevio tests on: Nvme2n1 00:10:15.670 Test: blockdev write read block ...passed 00:10:15.670 Test: blockdev write zeroes read block ...passed 00:10:15.670 Test: blockdev write zeroes read no split ...passed 00:10:15.670 Test: blockdev write zeroes read split ...passed 00:10:15.670 Test: blockdev write zeroes read split partial ...passed 00:10:15.670 Test: blockdev reset ...[2024-12-10 11:18:37.650839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:15.670 [2024-12-10 11:18:37.655237] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:15.670 passed 00:10:15.670 Test: blockdev write read 8 blocks ...passed 00:10:15.670 Test: blockdev write read size > 128k ...passed 00:10:15.670 Test: blockdev write read invalid size ...passed 00:10:15.670 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.670 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.670 Test: blockdev write read max offset ...passed 00:10:15.670 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:15.671 Test: blockdev writev readv 8 blocks ...passed 00:10:15.671 Test: blockdev writev readv 30 x 1block ...passed 00:10:15.671 Test: blockdev writev readv block ...passed 00:10:15.671 Test: blockdev writev readv size > 128k ...passed 00:10:15.671 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:15.671 Test: blockdev comparev and writev ...[2024-12-10 11:18:37.664291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf834000 len:0x1000 00:10:15.671 [2024-12-10 11:18:37.664359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:15.671 passed 00:10:15.671 Test: blockdev nvme passthru rw ...passed 00:10:15.671 Test: blockdev nvme passthru vendor specific ...[2024-12-10 11:18:37.665375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:15.671 passed 00:10:15.671 Test: blockdev nvme admin passthru ...[2024-12-10 11:18:37.665424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:15.671 passed 00:10:15.671 Test: blockdev copy ...passed 00:10:15.671 Suite: bdevio tests on: Nvme1n1p2 00:10:15.671 Test: blockdev write read block ...passed 00:10:15.671 Test: blockdev write zeroes read block ...passed 00:10:15.671 Test: blockdev write zeroes read no split ...passed 00:10:15.671 Test: blockdev write zeroes read split ...passed 00:10:15.671 Test: blockdev write zeroes read split partial ...passed 00:10:15.671 Test: blockdev reset ...[2024-12-10 11:18:37.743013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:15.671 passed 00:10:15.671 Test: blockdev write read 8 blocks ...[2024-12-10 11:18:37.746567] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:15.671 passed 00:10:15.671 Test: blockdev write read size > 128k ...passed 00:10:15.671 Test: blockdev write read invalid size ...passed 00:10:15.671 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.671 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.671 Test: blockdev write read max offset ...passed 00:10:15.671 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:15.671 Test: blockdev writev readv 8 blocks ...passed 00:10:15.671 Test: blockdev writev readv 30 x 1block ...passed 00:10:15.671 Test: blockdev writev readv block ...passed 00:10:15.671 Test: blockdev writev readv size > 128k ...passed 00:10:15.671 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:15.671 Test: blockdev comparev and writev ...[2024-12-10 11:18:37.754907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cf830000 len:0x1000 00:10:15.671 [2024-12-10 11:18:37.754973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:15.671 passed 00:10:15.671 Test: blockdev nvme passthru rw ...passed 00:10:15.671 Test: blockdev nvme passthru vendor specific ...passed 00:10:15.671 Test: blockdev nvme admin passthru ...passed 00:10:15.671 Test: blockdev copy ...passed 00:10:15.671 Suite: bdevio tests on: Nvme1n1p1 00:10:15.671 Test: blockdev write read block ...passed 00:10:15.671 Test: blockdev write zeroes read block ...passed 00:10:15.671 Test: blockdev write zeroes read no split ...passed 00:10:15.671 Test: blockdev write zeroes read split ...passed 00:10:15.671 Test: blockdev write zeroes read split partial ...passed 00:10:15.671 Test: blockdev reset ...[2024-12-10 11:18:37.825806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:15.671 [2024-12-10 11:18:37.829605] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:15.671 passed 00:10:15.671 Test: blockdev write read 8 blocks ...passed 00:10:15.671 Test: blockdev write read size > 128k ...passed 00:10:15.671 Test: blockdev write read invalid size ...passed 00:10:15.671 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.671 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.671 Test: blockdev write read max offset ...passed 00:10:15.671 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:15.671 Test: blockdev writev readv 8 blocks ...passed 00:10:15.930 Test: blockdev writev readv 30 x 1block ...passed 00:10:15.930 Test: blockdev writev readv block ...passed 00:10:15.930 Test: blockdev writev readv size > 128k ...passed 00:10:15.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:15.930 Test: blockdev comparev and writev ...[2024-12-10 11:18:37.839247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bbc0e000 len:0x1000 00:10:15.930 [2024-12-10 11:18:37.839313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:15.930 passed 00:10:15.930 Test: blockdev nvme passthru rw ...passed 00:10:15.930 Test: blockdev nvme passthru vendor specific ...passed 00:10:15.930 Test: blockdev nvme admin passthru ...passed 00:10:15.930 Test: blockdev copy ...passed 00:10:15.930 Suite: bdevio tests on: Nvme0n1 00:10:15.930 Test: blockdev write read block ...passed 00:10:15.930 Test: blockdev write zeroes read block ...passed 00:10:15.930 Test: blockdev write zeroes read no split ...passed 00:10:15.930 Test: blockdev write zeroes read split ...passed 00:10:15.930 Test: blockdev write zeroes read split partial ...passed 00:10:15.930 Test: blockdev reset ...[2024-12-10 11:18:37.908353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:15.930 [2024-12-10 11:18:37.912782] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:15.930 passed 00:10:15.930 Test: blockdev write read 8 blocks ...passed 00:10:15.930 Test: blockdev write read size > 128k ...passed 00:10:15.930 Test: blockdev write read invalid size ...passed 00:10:15.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.930 Test: blockdev write read max offset ...passed 00:10:15.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:15.930 Test: blockdev writev readv 8 blocks ...passed 00:10:15.930 Test: blockdev writev readv 30 x 1block ...passed 00:10:15.930 Test: blockdev writev readv block ...passed 00:10:15.930 Test: blockdev writev readv size > 128k ...passed 00:10:15.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:15.930 Test: blockdev comparev and writev ...passed 00:10:15.930 Test: blockdev nvme passthru rw ...[2024-12-10 11:18:37.920699] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:15.930 separate metadata which is not supported yet. 
00:10:15.930 passed 00:10:15.930 Test: blockdev nvme passthru vendor specific ...passed 00:10:15.930 Test: blockdev nvme admin passthru ...[2024-12-10 11:18:37.921230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:15.930 [2024-12-10 11:18:37.921298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:15.930 passed 00:10:15.930 Test: blockdev copy ...passed 00:10:15.930 00:10:15.930 Run Summary: Type Total Ran Passed Failed Inactive 00:10:15.930 suites 7 7 n/a 0 0 00:10:15.930 tests 161 161 161 0 0 00:10:15.930 asserts 1025 1025 1025 0 n/a 00:10:15.930 00:10:15.930 Elapsed time = 1.623 seconds 00:10:15.930 0 00:10:15.930 11:18:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63026 00:10:15.930 11:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63026 ']' 00:10:15.930 11:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63026 00:10:15.930 11:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:15.930 11:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.930 11:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63026 00:10:15.930 killing process with pid 63026 00:10:15.930 11:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.930 11:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.930 11:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63026' 00:10:15.930 11:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63026 00:10:15.930 11:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63026 00:10:16.865 11:18:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:16.865 00:10:16.865 real 0m2.799s 00:10:16.865 user 0m7.220s 00:10:16.865 sys 0m0.371s 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:16.866 ************************************ 00:10:16.866 END TEST bdev_bounds 00:10:16.866 ************************************ 00:10:16.866 11:18:38 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:16.866 11:18:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:16.866 11:18:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.866 11:18:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:16.866 ************************************ 00:10:16.866 START TEST bdev_nbd 00:10:16.866 ************************************ 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63090 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63090 /var/tmp/spdk-nbd.sock 00:10:16.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63090 ']' 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.866 11:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:17.125 [2024-12-10 11:18:39.057078] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
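For orientation, the trace that follows drives SPDK's NBD flow end to end over the dedicated /var/tmp/spdk-nbd.sock RPC socket. Condensed into a sketch below; the paths, socket, RPC names, and jq filter mirror the trace, but the loops themselves are illustrative rather than the exact nbd_function_test implementation:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdevs=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)

# Export each bdev over NBD through the dedicated RPC socket.
for i in "${!bdevs[@]}"; do
    "$rpc" -s "$sock" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
done

# nbd_get_disks returns the nbd_device -> bdev_name mapping as JSON;
# the trace extracts the device names with the same jq filter.
"$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'

# Tear everything down again.
for nbd in "${nbds[@]}"; do
    "$rpc" -s "$sock" nbd_stop_disk "$nbd"
done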
00:10:17.125 [2024-12-10 11:18:39.057253] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.125 [2024-12-10 11:18:39.241582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.383 [2024-12-10 11:18:39.355541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.949 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.949 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:17.949 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:17.949 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:17.950 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:17.950 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:17.950 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:17.950 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:17.950 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:17.950 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:17.950 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:17.950 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:17.950 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:17.950 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:17.950 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:18.208 1+0 records in 00:10:18.208 1+0 records out 00:10:18.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728873 s, 5.6 MB/s 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:18.208 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:18.774 1+0 records in 00:10:18.774 1+0 records out 00:10:18.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587984 s, 7.0 MB/s 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:18.774 11:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:19.033 1+0 records in 00:10:19.033 1+0 records out 00:10:19.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000721226 s, 5.7 MB/s 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:19.033 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:19.291 1+0 records in 00:10:19.291 1+0 records out 00:10:19.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447188 s, 9.2 MB/s 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:19.291 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:19.549 1+0 records in 00:10:19.549 1+0 records out 00:10:19.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720436 s, 5.7 MB/s 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:19.549 11:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:20.156 1+0 records in 00:10:20.156 1+0 records out 00:10:20.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657485 s, 6.2 MB/s 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:20.156 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:20.416 1+0 records in 00:10:20.416 1+0 records out 00:10:20.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717865 s, 5.7 MB/s 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:20.416 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:20.674 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd0", 00:10:20.674 "bdev_name": "Nvme0n1" 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd1", 00:10:20.674 "bdev_name": "Nvme1n1p1" 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd2", 00:10:20.674 "bdev_name": "Nvme1n1p2" 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd3", 00:10:20.674 "bdev_name": "Nvme2n1" 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd4", 00:10:20.674 "bdev_name": "Nvme2n2" 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd5", 00:10:20.674 "bdev_name": "Nvme2n3" 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd6", 00:10:20.674 "bdev_name": "Nvme3n1" 00:10:20.674 } 00:10:20.674 ]' 00:10:20.674 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:20.674 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd0", 00:10:20.674 "bdev_name": "Nvme0n1" 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd1", 00:10:20.674 "bdev_name": "Nvme1n1p1" 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd2", 00:10:20.674 "bdev_name": "Nvme1n1p2" 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd3", 00:10:20.674 "bdev_name": "Nvme2n1" 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd4", 00:10:20.674 "bdev_name": "Nvme2n2" 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd5", 00:10:20.674 "bdev_name": "Nvme2n3" 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "nbd_device": "/dev/nbd6", 00:10:20.674 "bdev_name": "Nvme3n1" 00:10:20.674 } 00:10:20.674 ]' 00:10:20.674 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:20.674 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:20.674 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.674 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:20.674 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:20.674 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:20.674 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:20.674 11:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:21.241 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:21.807 11:18:43 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:21.807 11:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:22.374 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:22.374 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:22.374 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:22.374 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.374 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.374 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:22.374 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:22.374 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.374 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.374 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:22.632 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:22.632 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:22.632 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:22.632 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.632 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.632 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:22.632 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:22.632 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.632 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.632 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:22.891 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:22.891 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:22.891 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:10:22.891 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.891 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.891 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:22.891 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:22.891 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.891 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:22.891 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.891 11:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:23.148 
11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:23.148 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:23.406 /dev/nbd0 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.406 1+0 records in 00:10:23.406 1+0 records out 00:10:23.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400085 s, 10.2 MB/s 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:23.406 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:10:23.973 /dev/nbd1 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:23.973 11:18:45 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.973 1+0 records in 00:10:23.973 1+0 records out 00:10:23.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517484 s, 7.9 MB/s 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:23.973 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.974 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:23.974 11:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:23.974 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:23.974 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:23.974 11:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:10:24.232 /dev/nbd10 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.232 1+0 records in 00:10:24.232 1+0 records out 00:10:24.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570653 s, 7.2 MB/s 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:24.232 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:24.233 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:24.491 /dev/nbd11 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.491 1+0 records in 00:10:24.491 1+0 records out 00:10:24.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511982 s, 8.0 MB/s 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:24.491 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:24.749 /dev/nbd12 00:10:24.749 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:24.749 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:24.749 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:24.749 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:24.749 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:24.749 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:24.749 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
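The waitfornbd helper exercised repeatedly in this trace follows a poll-then-probe pattern: wait for the kernel to publish the device in /proc/partitions, then prove it is usable with a single 4 KiB direct read. A sketch reconstructed from the xtrace above; the retry bound, dd arguments, and size check mirror the trace, while the back-off sleep is an assumption (a passing run never shows a second iteration):

waitfornbd() {
    local nbd_name=$1 i size
    local testfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

    # Poll until the device shows up in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; not visible in a passing trace
    done

    # Probe with one 4 KiB O_DIRECT read; a zero-byte result means not ready.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/"$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$testfile")
        rm -f "$testfile"
        [[ $size != 0 ]] && return 0
    done
    return 1
}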
00:10:25.007 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:25.007 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:25.007 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:25.007 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:25.007 1+0 records in 00:10:25.007 1+0 records out 00:10:25.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610442 s, 6.7 MB/s 00:10:25.007 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.007 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:25.007 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.007 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:25.007 11:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:25.007 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:25.007 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:25.007 11:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:25.265 /dev/nbd13 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:25.265 1+0 records in 00:10:25.265 1+0 records out 00:10:25.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683706 s, 6.0 MB/s 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:25.265 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:25.524 /dev/nbd14 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:25.524 1+0 records in 00:10:25.524 1+0 records out 00:10:25.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662253 s, 6.2 MB/s 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:25.524 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:25.782 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:25.782 { 00:10:25.782 "nbd_device": "/dev/nbd0", 00:10:25.782 "bdev_name": "Nvme0n1" 00:10:25.782 }, 00:10:25.782 { 00:10:25.782 "nbd_device": "/dev/nbd1", 00:10:25.782 "bdev_name": "Nvme1n1p1" 00:10:25.782 }, 00:10:25.782 { 00:10:25.782 "nbd_device": "/dev/nbd10", 00:10:25.782 "bdev_name": "Nvme1n1p2" 00:10:25.782 }, 00:10:25.782 { 00:10:25.782 "nbd_device": "/dev/nbd11", 00:10:25.782 "bdev_name": "Nvme2n1" 00:10:25.782 }, 00:10:25.782 { 00:10:25.782 "nbd_device": "/dev/nbd12", 00:10:25.782 "bdev_name": "Nvme2n2" 00:10:25.782 }, 00:10:25.782 { 00:10:25.782 "nbd_device": "/dev/nbd13", 00:10:25.782 "bdev_name": "Nvme2n3" 
00:10:25.782 }, 00:10:25.782 { 00:10:25.782 "nbd_device": "/dev/nbd14", 00:10:25.782 "bdev_name": "Nvme3n1" 00:10:25.782 } 00:10:25.782 ]' 00:10:25.782 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:25.782 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:25.782 { 00:10:25.782 "nbd_device": "/dev/nbd0", 00:10:25.782 "bdev_name": "Nvme0n1" 00:10:25.782 }, 00:10:25.782 { 00:10:25.782 "nbd_device": "/dev/nbd1", 00:10:25.782 "bdev_name": "Nvme1n1p1" 00:10:25.782 }, 00:10:25.782 { 00:10:25.782 "nbd_device": "/dev/nbd10", 00:10:25.782 "bdev_name": "Nvme1n1p2" 00:10:25.782 }, 00:10:25.783 { 00:10:25.783 "nbd_device": "/dev/nbd11", 00:10:25.783 "bdev_name": "Nvme2n1" 00:10:25.783 }, 00:10:25.783 { 00:10:25.783 "nbd_device": "/dev/nbd12", 00:10:25.783 "bdev_name": "Nvme2n2" 00:10:25.783 }, 00:10:25.783 { 00:10:25.783 "nbd_device": "/dev/nbd13", 00:10:25.783 "bdev_name": "Nvme2n3" 00:10:25.783 }, 00:10:25.783 { 00:10:25.783 "nbd_device": "/dev/nbd14", 00:10:25.783 "bdev_name": "Nvme3n1" 00:10:25.783 } 00:10:25.783 ]' 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:26.040 /dev/nbd1 00:10:26.040 /dev/nbd10 00:10:26.040 /dev/nbd11 00:10:26.040 /dev/nbd12 00:10:26.040 /dev/nbd13 00:10:26.040 /dev/nbd14' 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:26.040 /dev/nbd1 00:10:26.040 /dev/nbd10 00:10:26.040 /dev/nbd11 00:10:26.040 /dev/nbd12 00:10:26.040 /dev/nbd13 00:10:26.040 /dev/nbd14' 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:26.040 256+0 records in 00:10:26.040 256+0 records out 00:10:26.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720505 s, 146 MB/s 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.040 11:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:26.040 256+0 records in 00:10:26.040 256+0 records out 00:10:26.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.144659 s, 7.2 MB/s 00:10:26.040 11:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.040 11:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:26.298 256+0 records in 00:10:26.298 256+0 records out 00:10:26.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154912 s, 6.8 MB/s 00:10:26.298 11:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.298 11:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:26.298 256+0 records in 00:10:26.298 256+0 records out 00:10:26.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148263 s, 7.1 MB/s 00:10:26.298 11:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.298 11:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:26.557 256+0 records in 00:10:26.557 256+0 records out 00:10:26.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143025 s, 7.3 MB/s 00:10:26.557 11:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.557 11:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:26.816 256+0 records in 00:10:26.816 256+0 records out 00:10:26.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147887 s, 7.1 MB/s 00:10:26.816 11:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.816 11:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:26.816 256+0 records in 00:10:26.816 256+0 records out 00:10:26.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15151 s, 6.9 MB/s 00:10:26.816 11:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.816 11:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:27.074 256+0 records in 00:10:27.074 256+0 records out 00:10:27.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140348 s, 7.5 MB/s 00:10:27.074 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:27.074 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:27.075 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:27.333 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:27.333 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:27.333 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:27.333 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:27.333 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:27.333 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:27.333 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:27.334 11:18:49 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:27.334 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:27.334 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:27.592 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:27.592 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:27.592 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:27.592 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:27.592 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:27.592 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:27.592 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:27.592 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:27.592 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:27.592 11:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:28.158 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:28.158 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:28.158 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:28.158 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.158 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.158 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:28.158 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:28.158 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.158 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.158 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:28.417 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:28.417 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:28.417 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:28.417 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.417 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.417 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:28.417 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:28.417 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.417 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.417 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:28.676 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:10:28.676 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:28.676 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:28.676 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.676 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.676 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:28.676 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:28.676 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.676 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.676 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:28.934 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:28.934 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:28.934 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:28.934 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.934 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.934 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:28.934 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:28.934 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.934 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.934 11:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:29.192 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:29.192 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:29.192 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:29.192 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.192 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.192 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:29.192 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:29.192 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.192 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:29.192 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.192 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:29.450 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:29.450 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:29.450 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:29.708 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:29.965 malloc_lvol_verify 00:10:29.965 11:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:30.283 61bf9e23-8adb-4548-a0a6-da96c482d3f7 00:10:30.283 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:30.540 e63ec1bc-52b5-41ed-a990-1b9d9a77d8c7 00:10:30.540 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:30.799 /dev/nbd0 00:10:30.799 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:30.799 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:30.799 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:30.799 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:30.799 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:30.799 mke2fs 1.47.0 (5-Feb-2023) 00:10:30.799 Discarding device blocks: 0/4096 done 00:10:30.799 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:30.799 00:10:30.799 Allocating group tables: 0/1 done 00:10:30.799 Writing inode tables: 0/1 done 00:10:30.799 Creating journal (1024 blocks): done 00:10:30.799 Writing superblocks and filesystem accounting information: 0/1 done 00:10:30.799 00:10:30.799 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:30.799 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.799 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:30.799 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:30.799 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:30.799 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:10:30.799 11:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:31.056 11:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63090 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63090 ']' 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63090 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63090 00:10:31.057 11:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.315 11:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.315 killing process with pid 63090 00:10:31.315 11:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63090' 00:10:31.315 11:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63090 00:10:31.315 11:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63090 00:10:32.249 11:18:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:32.249 00:10:32.249 real 0m15.348s 00:10:32.249 user 0m22.569s 00:10:32.249 sys 0m4.638s 00:10:32.249 11:18:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.249 11:18:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:32.249 ************************************ 00:10:32.249 END TEST bdev_nbd 00:10:32.249 ************************************ 00:10:32.249 11:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:10:32.249 11:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:10:32.249 11:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:10:32.249 skipping fio tests on NVMe due to multi-ns failures. 00:10:32.249 11:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:10:32.249 11:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:32.249 11:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:32.249 11:18:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:32.249 11:18:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.249 11:18:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:32.249 ************************************ 00:10:32.249 START TEST bdev_verify 00:10:32.249 ************************************ 00:10:32.249 11:18:54 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:32.507 [2024-12-10 11:18:54.433936] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:10:32.507 [2024-12-10 11:18:54.434083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63550 ] 00:10:32.507 [2024-12-10 11:18:54.610893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:32.765 [2024-12-10 11:18:54.747509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.765 [2024-12-10 11:18:54.747511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.333 Running I/O for 5 seconds... 
00:10:35.640 19072.00 IOPS, 74.50 MiB/s [2024-12-10T11:18:58.770Z] 18240.00 IOPS, 71.25 MiB/s [2024-12-10T11:19:00.146Z] 18240.00 IOPS, 71.25 MiB/s [2024-12-10T11:19:00.714Z] 18160.00 IOPS, 70.94 MiB/s [2024-12-10T11:19:00.714Z] 17920.00 IOPS, 70.00 MiB/s 00:10:38.547 Latency(us) 00:10:38.548 [2024-12-10T11:19:00.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:38.548 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x0 length 0xbd0bd 00:10:38.548 Nvme0n1 : 5.08 1311.52 5.12 0.00 0.00 97357.55 20852.36 99614.72 00:10:38.548 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:38.548 Nvme0n1 : 5.08 1235.14 4.82 0.00 0.00 103382.20 19541.64 89605.59 00:10:38.548 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x0 length 0x4ff80 00:10:38.548 Nvme1n1p1 : 5.08 1311.02 5.12 0.00 0.00 97198.43 18588.39 94848.47 00:10:38.548 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x4ff80 length 0x4ff80 00:10:38.548 Nvme1n1p1 : 5.08 1234.62 4.82 0.00 0.00 103207.73 17396.83 85792.58 00:10:38.548 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x0 length 0x4ff7f 00:10:38.548 Nvme1n1p2 : 5.08 1310.60 5.12 0.00 0.00 97070.50 18826.71 89605.59 00:10:38.548 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:10:38.548 Nvme1n1p2 : 5.08 1234.23 4.82 0.00 0.00 103039.01 17039.36 82932.83 00:10:38.548 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x0 length 0x80000 00:10:38.548 Nvme2n1 : 5.08 1309.62 5.12 0.00 0.00 96918.48 21448.15 85315.96 00:10:38.548 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x80000 length 0x80000 00:10:38.548 Nvme2n1 : 5.09 1233.32 4.82 0.00 0.00 102884.72 19065.02 84362.71 00:10:38.548 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x0 length 0x80000 00:10:38.548 Nvme2n2 : 5.08 1309.14 5.11 0.00 0.00 96761.12 21567.30 89128.96 00:10:38.548 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x80000 length 0x80000 00:10:38.548 Nvme2n2 : 5.09 1232.67 4.82 0.00 0.00 102724.49 20137.43 86269.21 00:10:38.548 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x0 length 0x80000 00:10:38.548 Nvme2n3 : 5.09 1308.73 5.11 0.00 0.00 96595.08 20614.05 94371.84 00:10:38.548 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x80000 length 0x80000 00:10:38.548 Nvme2n3 : 5.09 1232.03 4.81 0.00 0.00 102548.32 20256.58 87699.08 00:10:38.548 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x0 length 0x20000 00:10:38.548 Nvme3n1 : 5.09 1308.04 5.11 0.00 0.00 96446.07 14060.45 99138.09 00:10:38.548 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.548 Verification LBA range: start 0x20000 length 0x20000 00:10:38.548 
Nvme3n1 : 5.09 1231.68 4.81 0.00 0.00 102381.74 14060.45 90082.21 00:10:38.548 [2024-12-10T11:19:00.715Z] =================================================================================================================== 00:10:38.548 [2024-12-10T11:19:00.715Z] Total : 17802.36 69.54 0.00 0.00 99805.23 14060.45 99614.72 00:10:39.926 00:10:39.926 real 0m7.528s 00:10:39.926 user 0m13.828s 00:10:39.926 sys 0m0.252s 00:10:39.926 11:19:01 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.926 11:19:01 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 ************************************ 00:10:39.926 END TEST bdev_verify 00:10:39.926 ************************************ 00:10:39.926 11:19:01 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:39.926 11:19:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:39.926 11:19:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.926 11:19:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 ************************************ 00:10:39.926 START TEST bdev_verify_big_io 00:10:39.926 ************************************ 00:10:39.926 11:19:01 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:39.926 [2024-12-10 11:19:02.029875] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:10:39.926 [2024-12-10 11:19:02.030041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63648 ] 00:10:40.185 [2024-12-10 11:19:02.216423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:40.185 [2024-12-10 11:19:02.343709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.185 [2024-12-10 11:19:02.343711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.120 Running I/O for 5 seconds... 
00:10:45.033 799.00 IOPS, 49.94 MiB/s [2024-12-10T11:19:09.732Z] 1245.00 IOPS, 77.81 MiB/s [2024-12-10T11:19:09.732Z] 2561.00 IOPS, 160.06 MiB/s 00:10:47.565 Latency(us) 00:10:47.565 [2024-12-10T11:19:09.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.565 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:47.565 Verification LBA range: start 0x0 length 0xbd0b 00:10:47.565 Nvme0n1 : 5.85 103.98 6.50 0.00 0.00 1172611.99 26929.34 1174405.12 00:10:47.566 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:47.566 Nvme0n1 : 5.89 114.09 7.13 0.00 0.00 1064834.19 21209.83 1174405.12 00:10:47.566 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0x0 length 0x4ff8 00:10:47.566 Nvme1n1p1 : 5.85 105.32 6.58 0.00 0.00 1129459.18 98661.47 1006632.96 00:10:47.566 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0x4ff8 length 0x4ff8 00:10:47.566 Nvme1n1p1 : 5.81 115.50 7.22 0.00 0.00 1031480.35 96754.97 999006.95 00:10:47.566 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0x0 length 0x4ff7 00:10:47.566 Nvme1n1p2 : 5.85 109.38 6.84 0.00 0.00 1071914.26 107717.35 1029510.98 00:10:47.566 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0x4ff7 length 0x4ff7 00:10:47.566 Nvme1n1p2 : 5.89 119.45 7.47 0.00 0.00 979233.77 80549.70 835047.80 00:10:47.566 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0x0 length 0x8000 00:10:47.566 Nvme2n1 : 5.85 109.33 6.83 0.00 0.00 1040472.99 106764.10 1060015.01 00:10:47.566 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0x8000 length 0x8000 00:10:47.566 Nvme2n1 : 6.03 122.97 7.69 0.00 0.00 923110.62 88652.33 850299.81 00:10:47.566 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0x0 length 0x8000 00:10:47.566 Nvme2n2 : 5.97 117.93 7.37 0.00 0.00 942750.76 46232.67 1082893.03 00:10:47.566 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0x8000 length 0x8000 00:10:47.566 Nvme2n2 : 6.03 123.27 7.70 0.00 0.00 893404.91 87699.08 876990.84 00:10:47.566 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0x0 length 0x8000 00:10:47.566 Nvme2n3 : 6.03 123.15 7.70 0.00 0.00 874375.73 28240.06 1113397.06 00:10:47.566 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0x8000 length 0x8000 00:10:47.566 Nvme2n3 : 6.06 130.78 8.17 0.00 0.00 826235.42 19660.80 1128649.08 00:10:47.566 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0x0 length 0x2000 00:10:47.566 Nvme3n1 : 6.05 135.96 8.50 0.00 0.00 773766.47 1608.61 1365055.30 00:10:47.566 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:47.566 Verification LBA range: start 0x2000 length 0x2000 00:10:47.566 Nvme3n1 : 6.07 101.88 6.37 0.00 0.00 1029982.50 6285.50 2135282.04 00:10:47.566 
[2024-12-10T11:19:09.733Z] =================================================================================================================== 00:10:47.566 [2024-12-10T11:19:09.733Z] Total : 1632.98 102.06 0.00 0.00 972392.57 1608.61 2135282.04 00:10:48.941 00:10:48.941 real 0m9.032s 00:10:48.941 user 0m16.867s 00:10:48.941 sys 0m0.293s 00:10:48.941 11:19:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.941 11:19:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.941 ************************************ 00:10:48.941 END TEST bdev_verify_big_io 00:10:48.941 ************************************ 00:10:48.941 11:19:11 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:48.941 11:19:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:48.941 11:19:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.941 11:19:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:48.941 ************************************ 00:10:48.941 START TEST bdev_write_zeroes 00:10:48.941 ************************************ 00:10:48.941 11:19:11 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:49.199 [2024-12-10 11:19:11.146288] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:10:49.199 [2024-12-10 11:19:11.146476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63767 ] 00:10:49.199 [2024-12-10 11:19:11.326833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.458 [2024-12-10 11:19:11.444665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.025 Running I/O for 1 seconds... 
00:10:51.399 43392.00 IOPS, 169.50 MiB/s 00:10:51.400 Latency(us) 00:10:51.400 [2024-12-10T11:19:13.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.400 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:51.400 Nvme0n1 : 1.04 6160.43 24.06 0.00 0.00 20707.32 14000.87 37891.72 00:10:51.400 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:51.400 Nvme1n1p1 : 1.03 6123.48 23.92 0.00 0.00 20661.64 13881.72 39798.23 00:10:51.400 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:51.400 Nvme1n1p2 : 1.04 6150.32 24.02 0.00 0.00 20667.61 14954.12 39321.60 00:10:51.400 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:51.400 Nvme2n1 : 1.04 6141.12 23.99 0.00 0.00 20580.36 11021.96 39083.29 00:10:51.400 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:51.400 Nvme2n2 : 1.04 6132.04 23.95 0.00 0.00 20568.48 10604.92 38606.66 00:10:51.400 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:51.400 Nvme2n3 : 1.05 6122.97 23.92 0.00 0.00 20551.01 10009.13 38368.35 00:10:51.400 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:51.400 Nvme3n1 : 1.05 6052.77 23.64 0.00 0.00 20757.22 13643.40 37891.72 00:10:51.400 [2024-12-10T11:19:13.567Z] =================================================================================================================== 00:10:51.400 [2024-12-10T11:19:13.567Z] Total : 42883.13 167.51 0.00 0.00 20641.76 10009.13 39798.23 00:10:52.335 ************************************ 00:10:52.335 END TEST bdev_write_zeroes 00:10:52.335 ************************************ 00:10:52.335 00:10:52.335 real 0m3.291s 00:10:52.335 user 0m2.918s 00:10:52.335 sys 0m0.244s 00:10:52.335 11:19:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.335 11:19:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:52.335 11:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:52.335 11:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:52.335 11:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.335 11:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:52.335 ************************************ 00:10:52.335 START TEST bdev_json_nonenclosed 00:10:52.335 ************************************ 00:10:52.335 11:19:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:52.335 [2024-12-10 11:19:14.459472] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:10:52.335 [2024-12-10 11:19:14.459869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63826 ] 00:10:52.594 [2024-12-10 11:19:14.637435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.853 [2024-12-10 11:19:14.762108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.853 [2024-12-10 11:19:14.762238] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:52.853 [2024-12-10 11:19:14.762271] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:52.853 [2024-12-10 11:19:14.762287] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:53.112 ************************************ 00:10:53.112 END TEST bdev_json_nonenclosed 00:10:53.112 ************************************ 00:10:53.112 00:10:53.112 real 0m0.698s 00:10:53.112 user 0m0.472s 00:10:53.112 sys 0m0.120s 00:10:53.112 11:19:15 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.112 11:19:15 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:53.112 11:19:15 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:53.112 11:19:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:53.112 11:19:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.112 11:19:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:53.112 ************************************ 00:10:53.112 START TEST bdev_json_nonarray 00:10:53.112 ************************************ 00:10:53.112 11:19:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:53.112 [2024-12-10 11:19:15.183273] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:10:53.112 [2024-12-10 11:19:15.183425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63847 ] 00:10:53.370 [2024-12-10 11:19:15.356330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.370 [2024-12-10 11:19:15.463399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.370 [2024-12-10 11:19:15.463521] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:53.370 [2024-12-10 11:19:15.463549] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:53.370 [2024-12-10 11:19:15.463563] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:53.628 00:10:53.628 real 0m0.642s 00:10:53.628 user 0m0.422s 00:10:53.628 sys 0m0.114s 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.628 ************************************ 00:10:53.628 END TEST bdev_json_nonarray 00:10:53.628 ************************************ 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:53.628 11:19:15 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:10:53.628 11:19:15 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:10:53.628 11:19:15 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:53.628 11:19:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:53.628 11:19:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.628 11:19:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:53.628 ************************************ 00:10:53.628 START TEST bdev_gpt_uuid 00:10:53.628 ************************************ 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:10:53.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63878 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63878 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63878 ']' 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.628 11:19:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:53.887 [2024-12-10 11:19:15.930543] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:10:53.887 [2024-12-10 11:19:15.930746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63878 ] 00:10:54.145 [2024-12-10 11:19:16.116568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.145 [2024-12-10 11:19:16.241945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.080 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.080 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:10:55.080 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:55.080 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.080 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:55.338 Some configs were skipped because the RPC state that can call them passed over. 00:10:55.339 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.339 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:10:55.339 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.339 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:55.339 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.339 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:55.339 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.339 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:55.339 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.339 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:10:55.339 { 00:10:55.339 "name": "Nvme1n1p1", 00:10:55.339 "aliases": [ 00:10:55.339 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:55.339 ], 00:10:55.339 "product_name": "GPT Disk", 00:10:55.339 "block_size": 4096, 00:10:55.339 "num_blocks": 655104, 00:10:55.339 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:55.339 "assigned_rate_limits": { 00:10:55.339 "rw_ios_per_sec": 0, 00:10:55.339 "rw_mbytes_per_sec": 0, 00:10:55.339 "r_mbytes_per_sec": 0, 00:10:55.339 "w_mbytes_per_sec": 0 00:10:55.339 }, 00:10:55.339 "claimed": false, 00:10:55.339 "zoned": false, 00:10:55.339 "supported_io_types": { 00:10:55.339 "read": true, 00:10:55.339 "write": true, 00:10:55.339 "unmap": true, 00:10:55.339 "flush": true, 00:10:55.339 "reset": true, 00:10:55.339 "nvme_admin": false, 00:10:55.339 "nvme_io": false, 00:10:55.339 "nvme_io_md": false, 00:10:55.339 "write_zeroes": true, 00:10:55.339 "zcopy": false, 00:10:55.339 "get_zone_info": false, 00:10:55.339 "zone_management": false, 00:10:55.339 "zone_append": false, 00:10:55.339 "compare": true, 00:10:55.339 "compare_and_write": false, 00:10:55.339 "abort": true, 00:10:55.339 "seek_hole": false, 00:10:55.339 "seek_data": false, 00:10:55.339 "copy": true, 00:10:55.339 "nvme_iov_md": false 00:10:55.339 }, 00:10:55.339 "driver_specific": { 
00:10:55.339 "gpt": { 00:10:55.339 "base_bdev": "Nvme1n1", 00:10:55.339 "offset_blocks": 256, 00:10:55.339 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:55.339 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:55.339 "partition_name": "SPDK_TEST_first" 00:10:55.339 } 00:10:55.339 } 00:10:55.339 } 00:10:55.339 ]' 00:10:55.339 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:10:55.597 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:10:55.598 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:10:55.598 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:55.598 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:55.598 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:55.598 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:55.598 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.598 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:55.598 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.598 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:10:55.598 { 00:10:55.598 "name": "Nvme1n1p2", 00:10:55.598 "aliases": [ 00:10:55.598 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:55.598 ], 00:10:55.598 "product_name": "GPT Disk", 00:10:55.598 "block_size": 4096, 00:10:55.598 "num_blocks": 655103, 00:10:55.598 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:55.598 "assigned_rate_limits": { 00:10:55.598 "rw_ios_per_sec": 0, 00:10:55.598 "rw_mbytes_per_sec": 0, 00:10:55.598 "r_mbytes_per_sec": 0, 00:10:55.598 "w_mbytes_per_sec": 0 00:10:55.598 }, 00:10:55.598 "claimed": false, 00:10:55.598 "zoned": false, 00:10:55.598 "supported_io_types": { 00:10:55.598 "read": true, 00:10:55.598 "write": true, 00:10:55.598 "unmap": true, 00:10:55.598 "flush": true, 00:10:55.598 "reset": true, 00:10:55.598 "nvme_admin": false, 00:10:55.598 "nvme_io": false, 00:10:55.598 "nvme_io_md": false, 00:10:55.598 "write_zeroes": true, 00:10:55.598 "zcopy": false, 00:10:55.598 "get_zone_info": false, 00:10:55.598 "zone_management": false, 00:10:55.598 "zone_append": false, 00:10:55.598 "compare": true, 00:10:55.598 "compare_and_write": false, 00:10:55.598 "abort": true, 00:10:55.598 "seek_hole": false, 00:10:55.598 "seek_data": false, 00:10:55.598 "copy": true, 00:10:55.598 "nvme_iov_md": false 00:10:55.598 }, 00:10:55.598 "driver_specific": { 00:10:55.598 "gpt": { 00:10:55.598 "base_bdev": "Nvme1n1", 00:10:55.598 "offset_blocks": 655360, 00:10:55.598 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:55.598 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:55.598 "partition_name": "SPDK_TEST_second" 00:10:55.598 } 00:10:55.598 } 00:10:55.598 } 00:10:55.598 ]' 00:10:55.598 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:10:55.598 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:10:55.598 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63878 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63878 ']' 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63878 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63878 00:10:55.857 killing process with pid 63878 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63878' 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63878 00:10:55.857 11:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63878 00:10:57.781 00:10:57.781 real 0m4.152s 00:10:57.781 user 0m4.459s 00:10:57.781 sys 0m0.484s 00:10:57.781 11:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.781 11:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:57.781 ************************************ 00:10:57.781 END TEST bdev_gpt_uuid 00:10:57.781 ************************************ 00:10:58.039 11:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:10:58.039 11:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:10:58.039 11:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:10:58.039 11:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:58.039 11:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:58.039 11:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:58.039 11:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:58.039 11:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:58.040 11:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:58.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:58.556 Waiting for block devices as requested 00:10:58.556 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.556 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:10:58.556 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.814 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:04.107 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:04.107 11:19:25 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:11:04.107 11:19:25 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:11:04.107 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:04.107 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:04.107 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:04.107 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:04.107 11:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:04.107 00:11:04.107 real 1m5.384s 00:11:04.107 user 1m25.124s 00:11:04.107 sys 0m9.709s 00:11:04.107 11:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.107 11:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:04.107 ************************************ 00:11:04.107 END TEST blockdev_nvme_gpt 00:11:04.107 ************************************ 00:11:04.107 11:19:26 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:04.107 11:19:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:04.107 11:19:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.107 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:11:04.107 ************************************ 00:11:04.107 START TEST nvme 00:11:04.107 ************************************ 00:11:04.107 11:19:26 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:04.107 * Looking for test storage... 00:11:04.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:04.107 11:19:26 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:04.366 11:19:26 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:04.366 11:19:26 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:04.366 11:19:26 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:04.366 11:19:26 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.366 11:19:26 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.366 11:19:26 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.366 11:19:26 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.366 11:19:26 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.366 11:19:26 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.366 11:19:26 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.366 11:19:26 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.366 11:19:26 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.366 11:19:26 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.366 11:19:26 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.366 11:19:26 nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:04.366 11:19:26 nvme -- scripts/common.sh@345 -- # : 1 00:11:04.366 11:19:26 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.366 11:19:26 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.366 11:19:26 nvme -- scripts/common.sh@365 -- # decimal 1 00:11:04.366 11:19:26 nvme -- scripts/common.sh@353 -- # local d=1 00:11:04.366 11:19:26 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.366 11:19:26 nvme -- scripts/common.sh@355 -- # echo 1 00:11:04.366 11:19:26 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.366 11:19:26 nvme -- scripts/common.sh@366 -- # decimal 2 00:11:04.366 11:19:26 nvme -- scripts/common.sh@353 -- # local d=2 00:11:04.366 11:19:26 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.366 11:19:26 nvme -- scripts/common.sh@355 -- # echo 2 00:11:04.366 11:19:26 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.366 11:19:26 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.366 11:19:26 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.366 11:19:26 nvme -- scripts/common.sh@368 -- # return 0 00:11:04.366 11:19:26 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.366 11:19:26 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:04.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.366 --rc genhtml_branch_coverage=1 00:11:04.366 --rc genhtml_function_coverage=1 00:11:04.366 --rc genhtml_legend=1 00:11:04.366 --rc geninfo_all_blocks=1 00:11:04.366 --rc geninfo_unexecuted_blocks=1 00:11:04.366 00:11:04.366 ' 00:11:04.366 11:19:26 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:04.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.366 --rc genhtml_branch_coverage=1 00:11:04.366 --rc genhtml_function_coverage=1 00:11:04.366 --rc genhtml_legend=1 00:11:04.366 --rc geninfo_all_blocks=1 00:11:04.366 --rc geninfo_unexecuted_blocks=1 00:11:04.366 00:11:04.366 ' 00:11:04.366 11:19:26 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:04.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.366 --rc genhtml_branch_coverage=1 00:11:04.366 --rc genhtml_function_coverage=1 00:11:04.366 --rc genhtml_legend=1 00:11:04.366 --rc geninfo_all_blocks=1 00:11:04.366 --rc geninfo_unexecuted_blocks=1 00:11:04.366 00:11:04.366 ' 00:11:04.366 11:19:26 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:04.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.366 --rc genhtml_branch_coverage=1 00:11:04.366 --rc genhtml_function_coverage=1 00:11:04.366 --rc genhtml_legend=1 00:11:04.366 --rc geninfo_all_blocks=1 00:11:04.366 --rc geninfo_unexecuted_blocks=1 00:11:04.366 00:11:04.366 ' 00:11:04.366 11:19:26 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:04.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:05.500 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.500 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.500 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.500 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.500 11:19:27 nvme -- nvme/nvme.sh@79 -- # uname 00:11:05.500 11:19:27 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:05.500 11:19:27 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:05.501 11:19:27 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:05.501 11:19:27 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:05.501 11:19:27 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:11:05.501 11:19:27 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:11:05.501 Waiting for stub to ready for secondary processes... 00:11:05.501 11:19:27 nvme -- common/autotest_common.sh@1075 -- # stubpid=64536 00:11:05.501 11:19:27 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:05.501 11:19:27 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:11:05.501 11:19:27 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:05.501 11:19:27 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64536 ]] 00:11:05.501 11:19:27 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:05.759 [2024-12-10 11:19:27.703613] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:11:05.759 [2024-12-10 11:19:27.703856] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:06.702 [2024-12-10 11:19:28.525623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:06.702 [2024-12-10 11:19:28.633180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.702 [2024-12-10 11:19:28.633298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.702 [2024-12-10 11:19:28.633316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.702 11:19:28 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:06.702 11:19:28 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64536 ]] 00:11:06.702 11:19:28 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:06.702 [2024-12-10 11:19:28.652609] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:06.702 [2024-12-10 11:19:28.652850] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:06.702 [2024-12-10 11:19:28.665928] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:06.702 [2024-12-10 11:19:28.666668] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:06.702 [2024-12-10 11:19:28.668859] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:06.702 [2024-12-10 11:19:28.669126] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:06.702 [2024-12-10 11:19:28.669230] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:06.702 [2024-12-10 11:19:28.672087] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:06.702 [2024-12-10 11:19:28.672335] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:06.702 [2024-12-10 11:19:28.672437] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:06.702 [2024-12-10 11:19:28.675554] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:06.702 [2024-12-10 11:19:28.675869] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:06.702 [2024-12-10 11:19:28.675978] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:06.702 [2024-12-10 11:19:28.676055] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:06.702 [2024-12-10 11:19:28.676124] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:07.636 11:19:29 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:07.636 done. 00:11:07.636 11:19:29 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:11:07.636 11:19:29 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:07.636 11:19:29 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:11:07.636 11:19:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.636 11:19:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:07.636 ************************************ 00:11:07.636 START TEST nvme_reset 00:11:07.636 ************************************ 00:11:07.636 11:19:29 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:07.894 Initializing NVMe Controllers 00:11:07.894 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:07.894 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:07.894 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:07.894 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:07.894 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:07.894 ************************************ 00:11:07.894 END TEST nvme_reset 00:11:07.894 ************************************ 00:11:07.894 00:11:07.894 real 0m0.323s 00:11:07.894 user 0m0.112s 00:11:07.894 sys 0m0.165s 00:11:07.894 11:19:29 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.894 11:19:29 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:07.894 11:19:30 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:07.894 11:19:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:07.894 11:19:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.894 11:19:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:07.894 ************************************ 00:11:07.894 START TEST nvme_identify 00:11:07.894 ************************************ 00:11:07.894 11:19:30 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:11:07.894 11:19:30 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:07.894 11:19:30 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:07.894 11:19:30 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:07.894 11:19:30 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:07.894 11:19:30 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:07.894 11:19:30 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:11:07.894 11:19:30 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:07.894 11:19:30 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:07.894 11:19:30 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:08.152 11:19:30 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:08.152 11:19:30 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:08.152 11:19:30 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:08.413 ===================================================== 00:11:08.413 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:08.413 ===================================================== 00:11:08.413 Controller Capabilities/Features 00:11:08.413 ================================ 00:11:08.413 Vendor ID: 1b36 00:11:08.413 Subsystem Vendor ID: 1af4 00:11:08.413 Serial Number: 12340 00:11:08.413 Model Number: QEMU NVMe Ctrl 00:11:08.413 Firmware Version: 8.0.0 00:11:08.413 Recommended Arb Burst: 6 00:11:08.413 IEEE OUI Identifier: 00 54 52 00:11:08.413 Multi-path I/O 00:11:08.413 May have multiple subsystem ports: No 00:11:08.413 May have multiple controllers: No 00:11:08.413 Associated with SR-IOV VF: No 00:11:08.413 Max Data Transfer Size: 524288 00:11:08.413 Max Number of Namespaces: 256 00:11:08.413 Max Number of I/O Queues: 64 00:11:08.413 NVMe Specification Version (VS): 1.4 00:11:08.413 NVMe Specification Version (Identify): 1.4 00:11:08.413 Maximum Queue Entries: 2048 00:11:08.413 Contiguous Queues Required: Yes 00:11:08.413 Arbitration Mechanisms Supported 00:11:08.413 Weighted Round Robin: Not Supported 00:11:08.413 Vendor Specific: Not Supported 00:11:08.413 Reset Timeout: 7500 ms 00:11:08.413 Doorbell Stride: 4 bytes 00:11:08.413 NVM Subsystem Reset: Not Supported 00:11:08.413 Command Sets Supported 00:11:08.413 NVM Command Set: Supported 00:11:08.413 Boot Partition: Not Supported 00:11:08.413 Memory Page Size Minimum: 4096 bytes 00:11:08.413 Memory Page Size Maximum: 65536 bytes 00:11:08.413 Persistent Memory Region: Not Supported 00:11:08.413 Optional Asynchronous Events Supported 00:11:08.413 Namespace Attribute Notices: Supported 00:11:08.413 Firmware Activation Notices: Not Supported 00:11:08.413 ANA Change Notices: Not Supported 00:11:08.413 PLE Aggregate Log Change Notices: Not Supported 00:11:08.413 LBA Status Info Alert Notices: Not Supported 00:11:08.413 EGE Aggregate Log Change Notices: Not Supported 00:11:08.413 Normal NVM Subsystem Shutdown event: Not Supported 00:11:08.413 Zone Descriptor Change Notices: Not Supported 00:11:08.413 Discovery Log Change Notices: Not Supported 00:11:08.413 Controller Attributes 00:11:08.413 128-bit Host Identifier: Not Supported 00:11:08.413 Non-Operational Permissive Mode: Not Supported 00:11:08.413 NVM Sets: Not Supported 00:11:08.413 Read Recovery Levels: Not Supported 00:11:08.413 Endurance Groups: Not Supported 00:11:08.413 Predictable Latency Mode: Not Supported 00:11:08.413 Traffic Based Keep ALive: Not Supported 00:11:08.413 Namespace Granularity: Not Supported 00:11:08.413 SQ Associations: Not Supported 00:11:08.413 UUID List: Not Supported 00:11:08.413 Multi-Domain Subsystem: Not Supported 00:11:08.413 Fixed Capacity Management: Not Supported 00:11:08.413 Variable Capacity Management: Not Supported 00:11:08.413 Delete Endurance Group: Not Supported 00:11:08.413 Delete NVM Set: Not Supported 00:11:08.413 Extended LBA Formats Supported: Supported 00:11:08.413 Flexible Data Placement Supported: Not Supported 00:11:08.413 00:11:08.413 Controller Memory Buffer Support 00:11:08.413 ================================ 00:11:08.413 Supported: No 00:11:08.413 00:11:08.413 Persistent Memory Region Support 00:11:08.413 ================================ 00:11:08.413 Supported: No 00:11:08.413 00:11:08.413 Admin 
Command Set Attributes 00:11:08.413 ============================ 00:11:08.413 Security Send/Receive: Not Supported 00:11:08.413 Format NVM: Supported 00:11:08.413 Firmware Activate/Download: Not Supported 00:11:08.414 Namespace Management: Supported 00:11:08.414 Device Self-Test: Not Supported 00:11:08.414 Directives: Supported 00:11:08.414 NVMe-MI: Not Supported 00:11:08.414 Virtualization Management: Not Supported 00:11:08.414 Doorbell Buffer Config: Supported 00:11:08.414 Get LBA Status Capability: Not Supported 00:11:08.414 Command & Feature Lockdown Capability: Not Supported 00:11:08.414 Abort Command Limit: 4 00:11:08.414 Async Event Request Limit: 4 00:11:08.414 Number of Firmware Slots: N/A 00:11:08.414 Firmware Slot 1 Read-Only: N/A 00:11:08.414 Firmware Activation Without Reset: N/A 00:11:08.414 Multiple Update Detection Support: N/A 00:11:08.414 Firmware Update Granularity: No Information Provided 00:11:08.414 Per-Namespace SMART Log: Yes 00:11:08.414 Asymmetric Namespace Access Log Page: Not Supported 00:11:08.414 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:08.414 Command Effects Log Page: Supported 00:11:08.414 Get Log Page Extended Data: Supported 00:11:08.414 Telemetry Log Pages: Not Supported 00:11:08.414 Persistent Event Log Pages: Not Supported 00:11:08.414 Supported Log Pages Log Page: May Support 00:11:08.414 Commands Supported & Effects Log Page: Not Supported 00:11:08.414 Feature Identifiers & Effects Log Page:May Support 00:11:08.414 NVMe-MI Commands & Effects Log Page: May Support 00:11:08.414 Data Area 4 for Telemetry Log: Not Supported 00:11:08.414 Error Log Page Entries Supported: 1 00:11:08.414 Keep Alive: Not Supported 00:11:08.414 00:11:08.414 NVM Command Set Attributes 00:11:08.414 ========================== 00:11:08.414 Submission Queue Entry Size 00:11:08.414 Max: 64 00:11:08.414 Min: 64 00:11:08.414 Completion Queue Entry Size 00:11:08.414 Max: 16 00:11:08.414 Min: 16 00:11:08.414 Number of Namespaces: 256 00:11:08.414 Compare Command: Supported 00:11:08.414 Write Uncorrectable Command: Not Supported 00:11:08.414 Dataset Management Command: Supported 00:11:08.414 Write Zeroes Command: Supported 00:11:08.414 Set Features Save Field: Supported 00:11:08.414 Reservations: Not Supported 00:11:08.414 Timestamp: Supported 00:11:08.414 Copy: Supported 00:11:08.414 Volatile Write Cache: Present 00:11:08.414 Atomic Write Unit (Normal): 1 00:11:08.414 Atomic Write Unit (PFail): 1 00:11:08.414 Atomic Compare & Write Unit: 1 00:11:08.414 Fused Compare & Write: Not Supported 00:11:08.414 Scatter-Gather List 00:11:08.414 SGL Command Set: Supported 00:11:08.414 SGL Keyed: Not Supported 00:11:08.414 SGL Bit Bucket Descriptor: Not Supported 00:11:08.414 SGL Metadata Pointer: Not Supported 00:11:08.414 Oversized SGL: Not Supported 00:11:08.414 SGL Metadata Address: Not Supported 00:11:08.414 SGL Offset: Not Supported 00:11:08.414 Transport SGL Data Block: Not Supported 00:11:08.414 Replay Protected Memory Block: Not Supported 00:11:08.414 00:11:08.414 Firmware Slot Information 00:11:08.414 ========================= 00:11:08.414 Active slot: 1 00:11:08.414 Slot 1 Firmware Revision: 1.0 00:11:08.414 00:11:08.414 00:11:08.414 Commands Supported and Effects 00:11:08.414 ============================== 00:11:08.414 Admin Commands 00:11:08.414 -------------- 00:11:08.414 Delete I/O Submission Queue (00h): Supported 00:11:08.414 Create I/O Submission Queue (01h): Supported 00:11:08.414 Get Log Page (02h): Supported 00:11:08.414 Delete I/O Completion Queue (04h): Supported 
00:11:08.414 Create I/O Completion Queue (05h): Supported 00:11:08.414 Identify (06h): Supported 00:11:08.414 Abort (08h): Supported 00:11:08.414 Set Features (09h): Supported 00:11:08.414 Get Features (0Ah): Supported 00:11:08.414 Asynchronous Event Request (0Ch): Supported 00:11:08.414 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:08.414 Directive Send (19h): Supported 00:11:08.414 Directive Receive (1Ah): Supported 00:11:08.414 Virtualization Management (1Ch): Supported 00:11:08.414 Doorbell Buffer Config (7Ch): Supported 00:11:08.414 Format NVM (80h): Supported LBA-Change 00:11:08.414 I/O Commands 00:11:08.414 ------------ 00:11:08.414 Flush (00h): Supported LBA-Change 00:11:08.414 Write (01h): Supported LBA-Change 00:11:08.414 Read (02h): Supported 00:11:08.414 Compare (05h): Supported 00:11:08.414 Write Zeroes (08h): Supported LBA-Change 00:11:08.414 Dataset Management (09h): Supported LBA-Change 00:11:08.414 Unknown (0Ch): Supported 00:11:08.414 Unknown (12h): Supported 00:11:08.414 Copy (19h): Supported LBA-Change 00:11:08.414 Unknown (1Dh): Supported LBA-Change 00:11:08.414 00:11:08.414 Error Log 00:11:08.414 ========= 00:11:08.414 00:11:08.414 Arbitration 00:11:08.414 =========== 00:11:08.414 Arbitration Burst: no limit 00:11:08.414 00:11:08.414 Power Management 00:11:08.414 ================ 00:11:08.414 Number of Power States: 1 00:11:08.414 Current Power State: Power State #0 00:11:08.414 Power State #0: 00:11:08.414 Max Power: 25.00 W 00:11:08.414 Non-Operational State: Operational 00:11:08.414 Entry Latency: 16 microseconds 00:11:08.414 Exit Latency: 4 microseconds 00:11:08.414 Relative Read Throughput: 0 00:11:08.414 Relative Read Latency: 0 00:11:08.414 Relative Write Throughput: 0 00:11:08.414 Relative Write Latency: 0 00:11:08.414 Idle Power: Not Reported [2024-12-10 11:19:30.371084] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64570 terminated unexpected [2024-12-10 11:19:30.372456] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64570 terminated unexpected 00:11:08.414 Active Power: Not Reported 00:11:08.414 Non-Operational Permissive Mode: Not Supported 00:11:08.414 00:11:08.414 Health Information 00:11:08.414 ================== 00:11:08.414 Critical Warnings: 00:11:08.414 Available Spare Space: OK 00:11:08.414 Temperature: OK 00:11:08.414 Device Reliability: OK 00:11:08.414 Read Only: No 00:11:08.414 Volatile Memory Backup: OK 00:11:08.414 Current Temperature: 323 Kelvin (50 Celsius) 00:11:08.414 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:08.414 Available Spare: 0% 00:11:08.414 Available Spare Threshold: 0% 00:11:08.414 Life Percentage Used: 0% 00:11:08.414 Data Units Read: 651 00:11:08.414 Data Units Written: 579 00:11:08.414 Host Read Commands: 32108 00:11:08.414 Host Write Commands: 31894 00:11:08.414 Controller Busy Time: 0 minutes 00:11:08.414 Power Cycles: 0 00:11:08.414 Power On Hours: 0 hours 00:11:08.414 Unsafe Shutdowns: 0 00:11:08.414 Unrecoverable Media Errors: 0 00:11:08.414 Lifetime Error Log Entries: 0 00:11:08.414 Warning Temperature Time: 0 minutes 00:11:08.414 Critical Temperature Time: 0 minutes 00:11:08.414 00:11:08.414 Number of Queues 00:11:08.414 ================ 00:11:08.414 Number of I/O Submission Queues: 64 00:11:08.414 Number of I/O Completion Queues: 64 00:11:08.414 00:11:08.414 ZNS Specific Controller Data 00:11:08.414 ============================ 00:11:08.414 Zone Append Size Limit: 0 00:11:08.414
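[Editor's note] The two bracketed nvme_ctrlr_remove_inactive_proc *ERROR* entries above come from the driver cleaning up after process 64570, which had been attached to the same controllers; in the raw capture they landed mid-way through the "Idle Power" field of this identify dump, because stdout and stderr share the console, and they have been moved onto their own entries here. A minimal sketch of re-running the same tool by hand with the streams kept apart, so a late *ERROR* cannot interleave the dump; the output file names are illustrative, not part of the test suite:

    # Same binary and -i 0 shared-memory id as the nvme.sh invocation above,
    # but with stderr kept out of the dump.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 \
        > identify.out 2> identify.err
    # Pull a couple of the health fields shown above back out of the dump.
    awk -F': ' '/Current Temperature:/  {print "temperature:", $2}
                /Life Percentage Used:/ {print "wear:", $2}' identify.out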
00:11:08.414 00:11:08.414 Active Namespaces 00:11:08.414 ================= 00:11:08.414 Namespace ID:1 00:11:08.414 Error Recovery Timeout: Unlimited 00:11:08.414 Command Set Identifier: NVM (00h) 00:11:08.414 Deallocate: Supported 00:11:08.414 Deallocated/Unwritten Error: Supported 00:11:08.414 Deallocated Read Value: All 0x00 00:11:08.414 Deallocate in Write Zeroes: Not Supported 00:11:08.414 Deallocated Guard Field: 0xFFFF 00:11:08.414 Flush: Supported 00:11:08.414 Reservation: Not Supported 00:11:08.414 Metadata Transferred as: Separate Metadata Buffer 00:11:08.414 Namespace Sharing Capabilities: Private 00:11:08.414 Size (in LBAs): 1548666 (5GiB) 00:11:08.414 Capacity (in LBAs): 1548666 (5GiB) 00:11:08.414 Utilization (in LBAs): 1548666 (5GiB) 00:11:08.414 Thin Provisioning: Not Supported 00:11:08.414 Per-NS Atomic Units: No 00:11:08.414 Maximum Single Source Range Length: 128 00:11:08.414 Maximum Copy Length: 128 00:11:08.414 Maximum Source Range Count: 128 00:11:08.414 NGUID/EUI64 Never Reused: No 00:11:08.414 Namespace Write Protected: No 00:11:08.414 Number of LBA Formats: 8 00:11:08.414 Current LBA Format: LBA Format #07 00:11:08.414 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:08.414 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:08.414 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:08.414 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:08.414 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:08.414 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:08.414 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:08.414 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:08.414 00:11:08.414 NVM Specific Namespace Data 00:11:08.414 =========================== 00:11:08.414 Logical Block Storage Tag Mask: 0 00:11:08.414 Protection Information Capabilities: 00:11:08.414 16b Guard Protection Information Storage Tag Support: No 00:11:08.414 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:08.414 Storage Tag Check Read Support: No 00:11:08.414 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.414 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.414 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.414 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.414 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.414 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.414 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.414 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.414 ===================================================== 00:11:08.414 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:08.414 ===================================================== 00:11:08.414 Controller Capabilities/Features 00:11:08.414 ================================ 00:11:08.414 Vendor ID: 1b36 00:11:08.414 Subsystem Vendor ID: 1af4 00:11:08.414 Serial Number: 12341 00:11:08.414 Model Number: QEMU NVMe Ctrl 00:11:08.414 Firmware Version: 8.0.0 00:11:08.414 Recommended Arb Burst: 6 00:11:08.414 IEEE OUI Identifier: 00 54 52 00:11:08.414 Multi-path I/O 00:11:08.414 May have multiple subsystem ports: No 00:11:08.414 May have multiple controllers: No 
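[Editor's note] The "(5GiB)" figures in these namespace listings appear to be the LBA count times the data size of the current LBA format (4096 bytes here), truncated to whole GiB: the 1548666-LBA namespace above and the 1310720-LBA namespace of controller 12341 further down therefore both print 5GiB from different raw sizes. A quick arithmetic check, as an editorial sketch rather than test output:

    echo $(( 1548666 * 4096 / 1024**3 ))   # 5  -> 6343335936 bytes, ~5.9 GiB, floored
    echo $(( 1310720 * 4096 / 1024**3 ))   # 5  -> 5368709120 bytes, exactly 5 GiB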
00:11:08.414 Associated with SR-IOV VF: No 00:11:08.414 Max Data Transfer Size: 524288 00:11:08.414 Max Number of Namespaces: 256 00:11:08.414 Max Number of I/O Queues: 64 00:11:08.414 NVMe Specification Version (VS): 1.4 00:11:08.414 NVMe Specification Version (Identify): 1.4 00:11:08.414 Maximum Queue Entries: 2048 00:11:08.414 Contiguous Queues Required: Yes 00:11:08.414 Arbitration Mechanisms Supported 00:11:08.414 Weighted Round Robin: Not Supported 00:11:08.414 Vendor Specific: Not Supported 00:11:08.414 Reset Timeout: 7500 ms 00:11:08.414 Doorbell Stride: 4 bytes 00:11:08.414 NVM Subsystem Reset: Not Supported 00:11:08.414 Command Sets Supported 00:11:08.414 NVM Command Set: Supported 00:11:08.414 Boot Partition: Not Supported 00:11:08.414 Memory Page Size Minimum: 4096 bytes 00:11:08.414 Memory Page Size Maximum: 65536 bytes 00:11:08.415 Persistent Memory Region: Not Supported 00:11:08.415 Optional Asynchronous Events Supported 00:11:08.415 Namespace Attribute Notices: Supported 00:11:08.415 Firmware Activation Notices: Not Supported 00:11:08.415 ANA Change Notices: Not Supported 00:11:08.415 PLE Aggregate Log Change Notices: Not Supported 00:11:08.415 LBA Status Info Alert Notices: Not Supported 00:11:08.415 EGE Aggregate Log Change Notices: Not Supported 00:11:08.415 Normal NVM Subsystem Shutdown event: Not Supported 00:11:08.415 Zone Descriptor Change Notices: Not Supported 00:11:08.415 Discovery Log Change Notices: Not Supported 00:11:08.415 Controller Attributes 00:11:08.415 128-bit Host Identifier: Not Supported 00:11:08.415 Non-Operational Permissive Mode: Not Supported 00:11:08.415 NVM Sets: Not Supported 00:11:08.415 Read Recovery Levels: Not Supported 00:11:08.415 Endurance Groups: Not Supported 00:11:08.415 Predictable Latency Mode: Not Supported 00:11:08.415 Traffic Based Keep ALive: Not Supported 00:11:08.415 Namespace Granularity: Not Supported 00:11:08.415 SQ Associations: Not Supported 00:11:08.415 UUID List: Not Supported 00:11:08.415 Multi-Domain Subsystem: Not Supported 00:11:08.415 Fixed Capacity Management: Not Supported 00:11:08.415 Variable Capacity Management: Not Supported 00:11:08.415 Delete Endurance Group: Not Supported 00:11:08.415 Delete NVM Set: Not Supported 00:11:08.415 Extended LBA Formats Supported: Supported 00:11:08.415 Flexible Data Placement Supported: Not Supported 00:11:08.415 00:11:08.415 Controller Memory Buffer Support 00:11:08.415 ================================ 00:11:08.415 Supported: No 00:11:08.415 00:11:08.415 Persistent Memory Region Support 00:11:08.415 ================================ 00:11:08.415 Supported: No 00:11:08.415 00:11:08.415 Admin Command Set Attributes 00:11:08.415 ============================ 00:11:08.415 Security Send/Receive: Not Supported 00:11:08.415 Format NVM: Supported 00:11:08.415 Firmware Activate/Download: Not Supported 00:11:08.415 Namespace Management: Supported 00:11:08.415 Device Self-Test: Not Supported 00:11:08.415 Directives: Supported 00:11:08.415 NVMe-MI: Not Supported 00:11:08.415 Virtualization Management: Not Supported 00:11:08.415 Doorbell Buffer Config: Supported 00:11:08.415 Get LBA Status Capability: Not Supported 00:11:08.415 Command & Feature Lockdown Capability: Not Supported 00:11:08.415 Abort Command Limit: 4 00:11:08.415 Async Event Request Limit: 4 00:11:08.415 Number of Firmware Slots: N/A 00:11:08.415 Firmware Slot 1 Read-Only: N/A 00:11:08.415 Firmware Activation Without Reset: N/A 00:11:08.415 Multiple Update Detection Support: N/A 00:11:08.415 Firmware Update Granularity: No 
Information Provided 00:11:08.415 Per-Namespace SMART Log: Yes 00:11:08.415 Asymmetric Namespace Access Log Page: Not Supported 00:11:08.415 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:08.415 Command Effects Log Page: Supported 00:11:08.415 Get Log Page Extended Data: Supported 00:11:08.415 Telemetry Log Pages: Not Supported 00:11:08.415 Persistent Event Log Pages: Not Supported 00:11:08.415 Supported Log Pages Log Page: May Support 00:11:08.415 Commands Supported & Effects Log Page: Not Supported 00:11:08.415 Feature Identifiers & Effects Log Page:May Support 00:11:08.415 NVMe-MI Commands & Effects Log Page: May Support 00:11:08.415 Data Area 4 for Telemetry Log: Not Supported 00:11:08.415 Error Log Page Entries Supported: 1 00:11:08.415 Keep Alive: Not Supported 00:11:08.415 00:11:08.415 NVM Command Set Attributes 00:11:08.415 ========================== 00:11:08.415 Submission Queue Entry Size 00:11:08.415 Max: 64 00:11:08.415 Min: 64 00:11:08.415 Completion Queue Entry Size 00:11:08.415 Max: 16 00:11:08.415 Min: 16 00:11:08.415 Number of Namespaces: 256 00:11:08.415 Compare Command: Supported 00:11:08.415 Write Uncorrectable Command: Not Supported 00:11:08.415 Dataset Management Command: Supported 00:11:08.415 Write Zeroes Command: Supported 00:11:08.415 Set Features Save Field: Supported 00:11:08.415 Reservations: Not Supported 00:11:08.415 Timestamp: Supported 00:11:08.415 Copy: Supported 00:11:08.415 Volatile Write Cache: Present 00:11:08.415 Atomic Write Unit (Normal): 1 00:11:08.415 Atomic Write Unit (PFail): 1 00:11:08.415 Atomic Compare & Write Unit: 1 00:11:08.415 Fused Compare & Write: Not Supported 00:11:08.415 Scatter-Gather List 00:11:08.415 SGL Command Set: Supported 00:11:08.415 SGL Keyed: Not Supported 00:11:08.415 SGL Bit Bucket Descriptor: Not Supported 00:11:08.415 SGL Metadata Pointer: Not Supported 00:11:08.415 Oversized SGL: Not Supported 00:11:08.415 SGL Metadata Address: Not Supported 00:11:08.415 SGL Offset: Not Supported 00:11:08.415 Transport SGL Data Block: Not Supported 00:11:08.415 Replay Protected Memory Block: Not Supported 00:11:08.415 00:11:08.415 Firmware Slot Information 00:11:08.415 ========================= 00:11:08.415 Active slot: 1 00:11:08.415 Slot 1 Firmware Revision: 1.0 00:11:08.415 00:11:08.415 00:11:08.415 Commands Supported and Effects 00:11:08.415 ============================== 00:11:08.415 Admin Commands 00:11:08.415 -------------- 00:11:08.415 Delete I/O Submission Queue (00h): Supported 00:11:08.415 Create I/O Submission Queue (01h): Supported 00:11:08.415 Get Log Page (02h): Supported 00:11:08.415 Delete I/O Completion Queue (04h): Supported 00:11:08.415 Create I/O Completion Queue (05h): Supported 00:11:08.415 Identify (06h): Supported 00:11:08.415 Abort (08h): Supported 00:11:08.415 Set Features (09h): Supported 00:11:08.415 Get Features (0Ah): Supported 00:11:08.415 Asynchronous Event Request (0Ch): Supported 00:11:08.415 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:08.415 Directive Send (19h): Supported 00:11:08.415 Directive Receive (1Ah): Supported 00:11:08.415 Virtualization Management (1Ch): Supported 00:11:08.415 Doorbell Buffer Config (7Ch): Supported 00:11:08.415 Format NVM (80h): Supported LBA-Change 00:11:08.415 I/O Commands 00:11:08.415 ------------ 00:11:08.415 Flush (00h): Supported LBA-Change 00:11:08.415 Write (01h): Supported LBA-Change 00:11:08.415 Read (02h): Supported 00:11:08.415 Compare (05h): Supported 00:11:08.415 Write Zeroes (08h): Supported LBA-Change 00:11:08.415 Dataset Management 
(09h): Supported LBA-Change 00:11:08.415 Unknown (0Ch): Supported 00:11:08.415 Unknown (12h): Supported 00:11:08.415 Copy (19h): Supported LBA-Change 00:11:08.415 Unknown (1Dh): Supported LBA-Change 00:11:08.415 00:11:08.415 Error Log 00:11:08.415 ========= 00:11:08.415 00:11:08.415 Arbitration 00:11:08.415 =========== 00:11:08.415 Arbitration Burst: no limit 00:11:08.415 00:11:08.415 Power Management 00:11:08.415 ================ 00:11:08.415 Number of Power States: 1 00:11:08.415 Current Power State: Power State #0 00:11:08.415 Power State #0: 00:11:08.415 Max Power: 25.00 W 00:11:08.415 Non-Operational State: Operational 00:11:08.415 Entry Latency: 16 microseconds 00:11:08.415 Exit Latency: 4 microseconds 00:11:08.415 Relative Read Throughput: 0 00:11:08.415 Relative Read Latency: 0 00:11:08.415 Relative Write Throughput: 0 00:11:08.415 Relative Write Latency: 0 00:11:08.415 Idle Power: Not Reported 00:11:08.415 Active Power: Not Reported 00:11:08.415 Non-Operational Permissive Mode: Not Supported 00:11:08.415 00:11:08.415 Health Information 00:11:08.415 ================== 00:11:08.415 Critical Warnings: 00:11:08.415 Available Spare Space: OK 00:11:08.415 Temperature: OK [2024-12-10 11:19:30.374139] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64570 terminated unexpected 00:11:08.415 Device Reliability: OK 00:11:08.415 Read Only: No 00:11:08.415 Volatile Memory Backup: OK 00:11:08.415 Current Temperature: 323 Kelvin (50 Celsius) 00:11:08.415 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:08.415 Available Spare: 0% 00:11:08.415 Available Spare Threshold: 0% 00:11:08.415 Life Percentage Used: 0% 00:11:08.415 Data Units Read: 1004 00:11:08.415 Data Units Written: 864 00:11:08.415 Host Read Commands: 47539 00:11:08.415 Host Write Commands: 46240 00:11:08.415 Controller Busy Time: 0 minutes 00:11:08.415 Power Cycles: 0 00:11:08.415 Power On Hours: 0 hours 00:11:08.415 Unsafe Shutdowns: 0 00:11:08.415 Unrecoverable Media Errors: 0 00:11:08.415 Lifetime Error Log Entries: 0 00:11:08.415 Warning Temperature Time: 0 minutes 00:11:08.415 Critical Temperature Time: 0 minutes 00:11:08.415 00:11:08.415 Number of Queues 00:11:08.415 ================ 00:11:08.415 Number of I/O Submission Queues: 64 00:11:08.415 Number of I/O Completion Queues: 64 00:11:08.415 00:11:08.415 ZNS Specific Controller Data 00:11:08.415 ============================ 00:11:08.415 Zone Append Size Limit: 0 00:11:08.415 00:11:08.415 00:11:08.415 Active Namespaces 00:11:08.415 ================= 00:11:08.415 Namespace ID:1 00:11:08.415 Error Recovery Timeout: Unlimited 00:11:08.415 Command Set Identifier: NVM (00h) 00:11:08.415 Deallocate: Supported 00:11:08.415 Deallocated/Unwritten Error: Supported 00:11:08.415 Deallocated Read Value: All 0x00 00:11:08.415 Deallocate in Write Zeroes: Not Supported 00:11:08.416 Deallocated Guard Field: 0xFFFF 00:11:08.416 Flush: Supported 00:11:08.416 Reservation: Not Supported 00:11:08.416 Namespace Sharing Capabilities: Private 00:11:08.416 Size (in LBAs): 1310720 (5GiB) 00:11:08.416 Capacity (in LBAs): 1310720 (5GiB) 00:11:08.416 Utilization (in LBAs): 1310720 (5GiB) 00:11:08.416 Thin Provisioning: Not Supported 00:11:08.416 Per-NS Atomic Units: No 00:11:08.416 Maximum Single Source Range Length: 128 00:11:08.416 Maximum Copy Length: 128 00:11:08.416 Maximum Source Range Count: 128 00:11:08.416 NGUID/EUI64 Never Reused: No 00:11:08.416 Namespace Write Protected: No 00:11:08.416 Number of LBA Formats: 8 00:11:08.416 Current LBA Format:
LBA Format #04 00:11:08.416 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:08.416 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:08.416 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:08.416 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:08.416 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:08.416 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:08.416 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:08.416 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:08.416 00:11:08.416 NVM Specific Namespace Data 00:11:08.416 =========================== 00:11:08.416 Logical Block Storage Tag Mask: 0 00:11:08.416 Protection Information Capabilities: 00:11:08.416 16b Guard Protection Information Storage Tag Support: No 00:11:08.416 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:08.416 Storage Tag Check Read Support: No 00:11:08.416 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.416 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.416 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.416 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.416 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.416 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.416 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.416 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.416 ===================================================== 00:11:08.416 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:08.416 ===================================================== 00:11:08.416 Controller Capabilities/Features 00:11:08.416 ================================ 00:11:08.416 Vendor ID: 1b36 00:11:08.416 Subsystem Vendor ID: 1af4 00:11:08.416 Serial Number: 12343 00:11:08.416 Model Number: QEMU NVMe Ctrl 00:11:08.416 Firmware Version: 8.0.0 00:11:08.416 Recommended Arb Burst: 6 00:11:08.416 IEEE OUI Identifier: 00 54 52 00:11:08.416 Multi-path I/O 00:11:08.416 May have multiple subsystem ports: No 00:11:08.416 May have multiple controllers: Yes 00:11:08.416 Associated with SR-IOV VF: No 00:11:08.416 Max Data Transfer Size: 524288 00:11:08.416 Max Number of Namespaces: 256 00:11:08.416 Max Number of I/O Queues: 64 00:11:08.416 NVMe Specification Version (VS): 1.4 00:11:08.416 NVMe Specification Version (Identify): 1.4 00:11:08.416 Maximum Queue Entries: 2048 00:11:08.416 Contiguous Queues Required: Yes 00:11:08.416 Arbitration Mechanisms Supported 00:11:08.416 Weighted Round Robin: Not Supported 00:11:08.416 Vendor Specific: Not Supported 00:11:08.416 Reset Timeout: 7500 ms 00:11:08.416 Doorbell Stride: 4 bytes 00:11:08.416 NVM Subsystem Reset: Not Supported 00:11:08.416 Command Sets Supported 00:11:08.416 NVM Command Set: Supported 00:11:08.416 Boot Partition: Not Supported 00:11:08.416 Memory Page Size Minimum: 4096 bytes 00:11:08.416 Memory Page Size Maximum: 65536 bytes 00:11:08.416 Persistent Memory Region: Not Supported 00:11:08.416 Optional Asynchronous Events Supported 00:11:08.416 Namespace Attribute Notices: Supported 00:11:08.416 Firmware Activation Notices: Not Supported 00:11:08.416 ANA Change Notices: Not Supported 00:11:08.416 PLE Aggregate Log 
Change Notices: Not Supported 00:11:08.416 LBA Status Info Alert Notices: Not Supported 00:11:08.416 EGE Aggregate Log Change Notices: Not Supported 00:11:08.416 Normal NVM Subsystem Shutdown event: Not Supported 00:11:08.416 Zone Descriptor Change Notices: Not Supported 00:11:08.416 Discovery Log Change Notices: Not Supported 00:11:08.416 Controller Attributes 00:11:08.416 128-bit Host Identifier: Not Supported 00:11:08.416 Non-Operational Permissive Mode: Not Supported 00:11:08.416 NVM Sets: Not Supported 00:11:08.416 Read Recovery Levels: Not Supported 00:11:08.416 Endurance Groups: Supported 00:11:08.416 Predictable Latency Mode: Not Supported 00:11:08.416 Traffic Based Keep ALive: Not Supported 00:11:08.416 Namespace Granularity: Not Supported 00:11:08.416 SQ Associations: Not Supported 00:11:08.416 UUID List: Not Supported 00:11:08.416 Multi-Domain Subsystem: Not Supported 00:11:08.416 Fixed Capacity Management: Not Supported 00:11:08.416 Variable Capacity Management: Not Supported 00:11:08.416 Delete Endurance Group: Not Supported 00:11:08.416 Delete NVM Set: Not Supported 00:11:08.416 Extended LBA Formats Supported: Supported 00:11:08.416 Flexible Data Placement Supported: Supported 00:11:08.416 00:11:08.416 Controller Memory Buffer Support 00:11:08.416 ================================ 00:11:08.416 Supported: No 00:11:08.416 00:11:08.416 Persistent Memory Region Support 00:11:08.416 ================================ 00:11:08.416 Supported: No 00:11:08.416 00:11:08.416 Admin Command Set Attributes 00:11:08.416 ============================ 00:11:08.416 Security Send/Receive: Not Supported 00:11:08.416 Format NVM: Supported 00:11:08.416 Firmware Activate/Download: Not Supported 00:11:08.416 Namespace Management: Supported 00:11:08.416 Device Self-Test: Not Supported 00:11:08.416 Directives: Supported 00:11:08.416 NVMe-MI: Not Supported 00:11:08.416 Virtualization Management: Not Supported 00:11:08.416 Doorbell Buffer Config: Supported 00:11:08.416 Get LBA Status Capability: Not Supported 00:11:08.416 Command & Feature Lockdown Capability: Not Supported 00:11:08.416 Abort Command Limit: 4 00:11:08.416 Async Event Request Limit: 4 00:11:08.416 Number of Firmware Slots: N/A 00:11:08.416 Firmware Slot 1 Read-Only: N/A 00:11:08.416 Firmware Activation Without Reset: N/A 00:11:08.416 Multiple Update Detection Support: N/A 00:11:08.416 Firmware Update Granularity: No Information Provided 00:11:08.416 Per-Namespace SMART Log: Yes 00:11:08.416 Asymmetric Namespace Access Log Page: Not Supported 00:11:08.416 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:08.416 Command Effects Log Page: Supported 00:11:08.416 Get Log Page Extended Data: Supported 00:11:08.416 Telemetry Log Pages: Not Supported 00:11:08.416 Persistent Event Log Pages: Not Supported 00:11:08.416 Supported Log Pages Log Page: May Support 00:11:08.416 Commands Supported & Effects Log Page: Not Supported 00:11:08.416 Feature Identifiers & Effects Log Page:May Support 00:11:08.416 NVMe-MI Commands & Effects Log Page: May Support 00:11:08.416 Data Area 4 for Telemetry Log: Not Supported 00:11:08.416 Error Log Page Entries Supported: 1 00:11:08.416 Keep Alive: Not Supported 00:11:08.416 00:11:08.416 NVM Command Set Attributes 00:11:08.416 ========================== 00:11:08.416 Submission Queue Entry Size 00:11:08.416 Max: 64 00:11:08.416 Min: 64 00:11:08.416 Completion Queue Entry Size 00:11:08.416 Max: 16 00:11:08.416 Min: 16 00:11:08.416 Number of Namespaces: 256 00:11:08.416 Compare Command: Supported 00:11:08.416 Write 
Uncorrectable Command: Not Supported 00:11:08.416 Dataset Management Command: Supported 00:11:08.416 Write Zeroes Command: Supported 00:11:08.416 Set Features Save Field: Supported 00:11:08.416 Reservations: Not Supported 00:11:08.416 Timestamp: Supported 00:11:08.416 Copy: Supported 00:11:08.416 Volatile Write Cache: Present 00:11:08.416 Atomic Write Unit (Normal): 1 00:11:08.416 Atomic Write Unit (PFail): 1 00:11:08.416 Atomic Compare & Write Unit: 1 00:11:08.416 Fused Compare & Write: Not Supported 00:11:08.416 Scatter-Gather List 00:11:08.416 SGL Command Set: Supported 00:11:08.416 SGL Keyed: Not Supported 00:11:08.416 SGL Bit Bucket Descriptor: Not Supported 00:11:08.416 SGL Metadata Pointer: Not Supported 00:11:08.416 Oversized SGL: Not Supported 00:11:08.416 SGL Metadata Address: Not Supported 00:11:08.416 SGL Offset: Not Supported 00:11:08.416 Transport SGL Data Block: Not Supported 00:11:08.416 Replay Protected Memory Block: Not Supported 00:11:08.416 00:11:08.416 Firmware Slot Information 00:11:08.416 ========================= 00:11:08.416 Active slot: 1 00:11:08.416 Slot 1 Firmware Revision: 1.0 00:11:08.416 00:11:08.416 00:11:08.416 Commands Supported and Effects 00:11:08.416 ============================== 00:11:08.416 Admin Commands 00:11:08.416 -------------- 00:11:08.416 Delete I/O Submission Queue (00h): Supported 00:11:08.416 Create I/O Submission Queue (01h): Supported 00:11:08.416 Get Log Page (02h): Supported 00:11:08.416 Delete I/O Completion Queue (04h): Supported 00:11:08.416 Create I/O Completion Queue (05h): Supported 00:11:08.416 Identify (06h): Supported 00:11:08.416 Abort (08h): Supported 00:11:08.416 Set Features (09h): Supported 00:11:08.416 Get Features (0Ah): Supported 00:11:08.416 Asynchronous Event Request (0Ch): Supported 00:11:08.417 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:08.417 Directive Send (19h): Supported 00:11:08.417 Directive Receive (1Ah): Supported 00:11:08.417 Virtualization Management (1Ch): Supported 00:11:08.417 Doorbell Buffer Config (7Ch): Supported 00:11:08.417 Format NVM (80h): Supported LBA-Change 00:11:08.417 I/O Commands 00:11:08.417 ------------ 00:11:08.417 Flush (00h): Supported LBA-Change 00:11:08.417 Write (01h): Supported LBA-Change 00:11:08.417 Read (02h): Supported 00:11:08.417 Compare (05h): Supported 00:11:08.417 Write Zeroes (08h): Supported LBA-Change 00:11:08.417 Dataset Management (09h): Supported LBA-Change 00:11:08.417 Unknown (0Ch): Supported 00:11:08.417 Unknown (12h): Supported 00:11:08.417 Copy (19h): Supported LBA-Change 00:11:08.417 Unknown (1Dh): Supported LBA-Change 00:11:08.417 00:11:08.417 Error Log 00:11:08.417 ========= 00:11:08.417 00:11:08.417 Arbitration 00:11:08.417 =========== 00:11:08.417 Arbitration Burst: no limit 00:11:08.417 00:11:08.417 Power Management 00:11:08.417 ================ 00:11:08.417 Number of Power States: 1 00:11:08.417 Current Power State: Power State #0 00:11:08.417 Power State #0: 00:11:08.417 Max Power: 25.00 W 00:11:08.417 Non-Operational State: Operational 00:11:08.417 Entry Latency: 16 microseconds 00:11:08.417 Exit Latency: 4 microseconds 00:11:08.417 Relative Read Throughput: 0 00:11:08.417 Relative Read Latency: 0 00:11:08.417 Relative Write Throughput: 0 00:11:08.417 Relative Write Latency: 0 00:11:08.417 Idle Power: Not Reported 00:11:08.417 Active Power: Not Reported 00:11:08.417 Non-Operational Permissive Mode: Not Supported 00:11:08.417 00:11:08.417 Health Information 00:11:08.417 ================== 00:11:08.417 Critical Warnings: 00:11:08.417 
Available Spare Space: OK 00:11:08.417 Temperature: OK 00:11:08.417 Device Reliability: OK 00:11:08.417 Read Only: No 00:11:08.417 Volatile Memory Backup: OK 00:11:08.417 Current Temperature: 323 Kelvin (50 Celsius) 00:11:08.417 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:08.417 Available Spare: 0% 00:11:08.417 Available Spare Threshold: 0% 00:11:08.417 Life Percentage Used: 0% 00:11:08.417 Data Units Read: 722 00:11:08.417 Data Units Written: 651 00:11:08.417 Host Read Commands: 32780 00:11:08.417 Host Write Commands: 32203 00:11:08.417 Controller Busy Time: 0 minutes 00:11:08.417 Power Cycles: 0 00:11:08.417 Power On Hours: 0 hours 00:11:08.417 Unsafe Shutdowns: 0 00:11:08.417 Unrecoverable Media Errors: 0 00:11:08.417 Lifetime Error Log Entries: 0 00:11:08.417 Warning Temperature Time: 0 minutes 00:11:08.417 Critical Temperature Time: 0 minutes 00:11:08.417 00:11:08.417 Number of Queues 00:11:08.417 ================ 00:11:08.417 Number of I/O Submission Queues: 64 00:11:08.417 Number of I/O Completion Queues: 64 00:11:08.417 00:11:08.417 ZNS Specific Controller Data 00:11:08.417 ============================ 00:11:08.417 Zone Append Size Limit: 0 00:11:08.417 00:11:08.417 00:11:08.417 Active Namespaces 00:11:08.417 ================= 00:11:08.417 Namespace ID:1 00:11:08.417 Error Recovery Timeout: Unlimited 00:11:08.417 Command Set Identifier: NVM (00h) 00:11:08.417 Deallocate: Supported 00:11:08.417 Deallocated/Unwritten Error: Supported 00:11:08.417 Deallocated Read Value: All 0x00 00:11:08.417 Deallocate in Write Zeroes: Not Supported 00:11:08.417 Deallocated Guard Field: 0xFFFF 00:11:08.417 Flush: Supported 00:11:08.417 Reservation: Not Supported 00:11:08.417 Namespace Sharing Capabilities: Multiple Controllers 00:11:08.417 Size (in LBAs): 262144 (1GiB) 00:11:08.417 Capacity (in LBAs): 262144 (1GiB) 00:11:08.417 Utilization (in LBAs): 262144 (1GiB) 00:11:08.417 Thin Provisioning: Not Supported 00:11:08.417 Per-NS Atomic Units: No 00:11:08.417 Maximum Single Source Range Length: 128 00:11:08.417 Maximum Copy Length: 128 00:11:08.417 Maximum Source Range Count: 128 00:11:08.417 NGUID/EUI64 Never Reused: No 00:11:08.417 Namespace Write Protected: No 00:11:08.417 Endurance group ID: 1 00:11:08.417 Number of LBA Formats: 8 00:11:08.417 Current LBA Format: LBA Format #04 00:11:08.417 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:08.417 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:08.417 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:08.417 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:08.417 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:08.417 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:08.417 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:08.417 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:08.417 00:11:08.417 Get Feature FDP: 00:11:08.417 ================ 00:11:08.417 Enabled: Yes 00:11:08.417 FDP configuration index: 0 00:11:08.417 00:11:08.417 FDP configurations log page 00:11:08.417 =========================== 00:11:08.417 Number of FDP configurations: 1 00:11:08.417 Version: 0 00:11:08.417 Size: 112 00:11:08.417 FDP Configuration Descriptor: 0 00:11:08.417 Descriptor Size: 96 00:11:08.417 Reclaim Group Identifier format: 2 00:11:08.417 FDP Volatile Write Cache: Not Present 00:11:08.417 FDP Configuration: Valid 00:11:08.417 Vendor Specific Size: 0 00:11:08.417 Number of Reclaim Groups: 2 00:11:08.417 Number of Reclaim Unit Handles: 8 00:11:08.417 Max Placement Identifiers: 128 00:11:08.417 Number of
Namespaces Supported: 256 00:11:08.417 Reclaim unit Nominal Size: 6000000 bytes 00:11:08.417 Estimated Reclaim Unit Time Limit: Not Reported 00:11:08.417 RUH Desc #000: RUH Type: Initially Isolated 00:11:08.417 RUH Desc #001: RUH Type: Initially Isolated 00:11:08.417 RUH Desc #002: RUH Type: Initially Isolated 00:11:08.417 RUH Desc #003: RUH Type: Initially Isolated 00:11:08.417 RUH Desc #004: RUH Type: Initially Isolated 00:11:08.417 RUH Desc #005: RUH Type: Initially Isolated 00:11:08.417 RUH Desc #006: RUH Type: Initially Isolated 00:11:08.417 RUH Desc #007: RUH Type: Initially Isolated 00:11:08.417 00:11:08.417 FDP reclaim unit handle usage log page 00:11:08.417 ====================================== 00:11:08.417 Number of Reclaim Unit Handles: 8 00:11:08.417 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:08.417 RUH Usage Desc #001: RUH Attributes: Unused 00:11:08.417 RUH Usage Desc #002: RUH Attributes: Unused 00:11:08.417 RUH Usage Desc #003: RUH Attributes: Unused 00:11:08.417 RUH Usage Desc #004: RUH Attributes: Unused 00:11:08.417 RUH Usage Desc #005: RUH Attributes: Unused 00:11:08.417 RUH Usage Desc #006: RUH Attributes: Unused 00:11:08.417 RUH Usage Desc #007: RUH Attributes: Unused 00:11:08.417 00:11:08.417 FDP statistics log page 00:11:08.417 ======================= 00:11:08.417 Host bytes with metadata written: 411607040 00:11:08.417 Media bytes with metadata written: 411652096 00:11:08.417 Media bytes erased: 0 00:11:08.417 00:11:08.417 FDP events log page 00:11:08.417 =================== 00:11:08.417 Number of FDP events: 0 00:11:08.417 00:11:08.417 NVM Specific Namespace Data 00:11:08.417 =========================== 00:11:08.417 Logical Block Storage Tag Mask: 0 00:11:08.417 Protection Information Capabilities: 00:11:08.417 16b Guard Protection Information Storage Tag Support: No 00:11:08.417 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:08.417 Storage Tag Check Read Support: No 00:11:08.417 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.417 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.417 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.417 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.417 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.417 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.417 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.417 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.417 ===================================================== 00:11:08.417 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:08.417 ===================================================== 00:11:08.417 Controller Capabilities/Features 00:11:08.417 ================================ 00:11:08.417 Vendor ID: 1b36 00:11:08.417 Subsystem Vendor ID: 1af4 00:11:08.417 Serial Number: 12342 00:11:08.417 Model Number: QEMU NVMe Ctrl 00:11:08.417 Firmware Version: 8.0.0 00:11:08.417 Recommended Arb Burst: 6 00:11:08.417 IEEE OUI Identifier: 00 54 52 00:11:08.417 Multi-path I/O 00:11:08.417 May have multiple subsystem ports: No 00:11:08.417 May have multiple controllers: No 00:11:08.417 Associated with SR-IOV VF: No 00:11:08.417 Max Data
Transfer Size: 524288 00:11:08.417 Max Number of Namespaces: 256 00:11:08.417 Max Number of I/O Queues: 64 00:11:08.417 NVMe Specification Version (VS): 1.4 00:11:08.417 NVMe Specification Version (Identify): 1.4 00:11:08.417 Maximum Queue Entries: 2048 00:11:08.417 Contiguous Queues Required: Yes 00:11:08.417 Arbitration Mechanisms Supported 00:11:08.417 Weighted Round Robin: Not Supported 00:11:08.417 Vendor Specific: Not Supported 00:11:08.417 Reset Timeout: 7500 ms 00:11:08.417 Doorbell Stride: 4 bytes 00:11:08.418 NVM Subsystem Reset: Not Supported 00:11:08.418 Command Sets Supported 00:11:08.418 NVM Command Set: Supported 00:11:08.418 Boot Partition: Not Supported 00:11:08.418 Memory Page Size Minimum: 4096 bytes 00:11:08.418 Memory Page Size Maximum: 65536 bytes 00:11:08.418 Persistent Memory Region: Not Supported 00:11:08.418 Optional Asynchronous Events Supported 00:11:08.418 Namespace Attribute Notices: Supported 00:11:08.418 Firmware Activation Notices: Not Supported 00:11:08.418 ANA Change Notices: Not Supported 00:11:08.418 PLE Aggregate Log Change Notices: Not Supported 00:11:08.418 LBA Status Info Alert Notices: Not Supported 00:11:08.418 EGE Aggregate Log Change Notices: Not Supported 00:11:08.418 Normal NVM Subsystem Shutdown event: Not Supported 00:11:08.418 Zone Descriptor Change Notices: Not Supported 00:11:08.418 Discovery Log Change Notices: Not Supported 00:11:08.418 Controller Attributes 00:11:08.418 128-bit Host Identifier: Not Supported 00:11:08.418 Non-Operational Permissive Mode: Not Supported 00:11:08.418 NVM Sets: Not Supported 00:11:08.418 Read Recovery Levels: Not Supported 00:11:08.418 Endurance Groups: Not Supported 00:11:08.418 Predictable Latency Mode: Not Supported 00:11:08.418 Traffic Based Keep Alive: Not Supported 00:11:08.418 Namespace Granularity: Not Supported 00:11:08.418 SQ Associations: Not Supported 00:11:08.418 UUID List: Not Supported 00:11:08.418 Multi-Domain Subsystem: Not Supported 00:11:08.418 Fixed Capacity Management: Not Supported 00:11:08.418 Variable Capacity Management: Not Supported 00:11:08.418 Delete Endurance Group: Not Supported 00:11:08.418 Delete NVM Set: Not Supported 00:11:08.418 Extended LBA Formats Supported: Supported 00:11:08.418 Flexible Data Placement Supported: Not Supported 00:11:08.418 00:11:08.418 Controller Memory Buffer Support 00:11:08.418 ================================ 00:11:08.418 Supported: No 00:11:08.418 00:11:08.418 Persistent Memory Region Support 00:11:08.418 ================================ 00:11:08.418 Supported: No 00:11:08.418 00:11:08.418 Admin Command Set Attributes 00:11:08.418 ============================ 00:11:08.418 Security Send/Receive: Not Supported 00:11:08.418 Format NVM: Supported 00:11:08.418 Firmware Activate/Download: Not Supported 00:11:08.418 Namespace Management: Supported 00:11:08.418 Device Self-Test: Not Supported 00:11:08.418 Directives: Supported 00:11:08.418 NVMe-MI: Not Supported 00:11:08.418 Virtualization Management: Not Supported 00:11:08.418 Doorbell Buffer Config: Supported 00:11:08.418 Get LBA Status Capability: Not Supported 00:11:08.418 Command & Feature Lockdown Capability: Not Supported 00:11:08.418 Abort Command Limit: 4 00:11:08.418 Async Event Request Limit: 4 00:11:08.418 Number of Firmware Slots: N/A 00:11:08.418 Firmware Slot 1 Read-Only: N/A 00:11:08.418 Firmware Activation Without Reset: N/A 00:11:08.418 Multiple Update Detection Support: N/A 00:11:08.418 Firmware Update Granularity: No Information Provided 00:11:08.418 Per-Namespace SMART Log: Yes 
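(Editor's aside, not part of the test output: the controller report above lists "Doorbell Stride: 4 bytes", i.e. CAP.DSTRD = 0. A minimal sketch of what that stride implies on the wire, assuming the standard NVMe 1.4 register layout where the doorbell array starts at BAR0 offset 0x1000; all names below are made up for the example.)

/* Illustrative doorbell-offset arithmetic for a 4-byte stride. */
#include <stdio.h>

#define DOORBELL_BASE 0x1000u  /* first doorbell sits at BAR0 + 0x1000 */

/* SQ y tail doorbell: base + (2y) * stride; CQ y head: base + (2y + 1) * stride */
static unsigned sq_tail(unsigned qid, unsigned stride) { return DOORBELL_BASE + (2 * qid) * stride; }
static unsigned cq_head(unsigned qid, unsigned stride) { return DOORBELL_BASE + (2 * qid + 1) * stride; }

int main(void)
{
    const unsigned stride = 4;              /* from "Doorbell Stride: 4 bytes" */
    for (unsigned qid = 0; qid <= 2; qid++) /* qid 0 is the admin queue pair */
        printf("qid %u: SQ tail 0x%04x, CQ head 0x%04x\n",
               qid, sq_tail(qid, stride), cq_head(qid, stride));
    return 0;
}

With the reported stride this prints 0x1000/0x1004 for the admin pair and 0x1008/0x100c, 0x1010/0x1014 for the first two I/O queue pairs.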
00:11:08.418 Asymmetric Namespace Access Log Page: Not Supported 00:11:08.418 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:08.418 Command Effects Log Page: Supported 00:11:08.418 Get Log Page Extended Data: Supported 00:11:08.418 Telemetry Log Pages: Not Supported 00:11:08.418 Persistent Event Log Pages: Not Supported 00:11:08.418 Supported Log Pages Log Page: May Support 00:11:08.418 Commands Supported & Effects Log Page: Not Supported 00:11:08.418 Feature Identifiers & Effects Log Page: May Support 00:11:08.418 NVMe-MI Commands & Effects Log Page: May Support 00:11:08.418 Data Area 4 for Telemetry Log: Not Supported 00:11:08.418 Error Log Page Entries Supported: 1 00:11:08.418 Keep Alive: Not Supported 00:11:08.418 00:11:08.418 NVM Command Set Attributes 00:11:08.418 ========================== 00:11:08.418 Submission Queue Entry Size 00:11:08.418 Max: 64 00:11:08.418 Min: 64 00:11:08.418 Completion Queue Entry Size 00:11:08.418 Max: 16 00:11:08.418 Min: 16 00:11:08.418 Number of Namespaces: 256 00:11:08.418 Compare Command: Supported 00:11:08.418 Write Uncorrectable Command: Not Supported 00:11:08.418 Dataset Management Command: Supported 00:11:08.418 Write Zeroes Command: Supported 00:11:08.418 Set Features Save Field: Supported 00:11:08.418 Reservations: Not Supported 00:11:08.418 Timestamp: Supported 00:11:08.418 Copy: Supported 00:11:08.418 Volatile Write Cache: Present 00:11:08.418 Atomic Write Unit (Normal): 1 00:11:08.418 Atomic Write Unit (PFail): 1 00:11:08.418 Atomic Compare & Write Unit: 1 00:11:08.418 Fused Compare & Write: Not Supported 00:11:08.418 Scatter-Gather List 00:11:08.418 SGL Command Set: Supported 00:11:08.418 SGL Keyed: Not Supported 00:11:08.418 SGL Bit Bucket Descriptor: Not Supported 00:11:08.418 SGL Metadata Pointer: Not Supported 00:11:08.418 Oversized SGL: Not Supported 00:11:08.418 SGL Metadata Address: Not Supported 00:11:08.418 SGL Offset: Not Supported 00:11:08.418 Transport SGL Data Block: Not Supported 00:11:08.418 Replay Protected Memory Block: Not Supported 00:11:08.418 00:11:08.418 Firmware Slot Information 00:11:08.418 ========================= 00:11:08.418 Active slot: 1 00:11:08.418 Slot 1 Firmware Revision: 1.0 00:11:08.418 00:11:08.418 00:11:08.418 Commands Supported and Effects 00:11:08.418 ============================== 00:11:08.418 Admin Commands 00:11:08.418 -------------- 00:11:08.418 Delete I/O Submission Queue (00h): Supported 00:11:08.418 Create I/O Submission Queue (01h): Supported 00:11:08.418 Get Log Page (02h): Supported 00:11:08.418 Delete I/O Completion Queue (04h): Supported 00:11:08.418 Create I/O Completion Queue (05h): Supported 00:11:08.418 Identify (06h): Supported 00:11:08.418 Abort (08h): Supported 00:11:08.418 Set Features (09h): Supported 00:11:08.418 Get Features (0Ah): Supported 00:11:08.418 Asynchronous Event Request (0Ch): Supported 00:11:08.418 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:08.418 Directive Send (19h): Supported 00:11:08.418 Directive Receive (1Ah): Supported 00:11:08.418 Virtualization Management (1Ch): Supported 00:11:08.418 [2024-12-10 11:19:30.377687] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64570 terminated unexpected 00:11:08.418 Doorbell Buffer Config (7Ch): Supported 00:11:08.418 Format NVM (80h): Supported LBA-Change 00:11:08.418 I/O Commands 00:11:08.418 ------------ 00:11:08.418 Flush (00h): Supported LBA-Change 00:11:08.418 Write (01h): Supported LBA-Change 00:11:08.418 Read (02h): Supported 00:11:08.418 Compare (05h): 
Supported 00:11:08.418 Write Zeroes (08h): Supported LBA-Change 00:11:08.418 Dataset Management (09h): Supported LBA-Change 00:11:08.418 Unknown (0Ch): Supported 00:11:08.418 Unknown (12h): Supported 00:11:08.418 Copy (19h): Supported LBA-Change 00:11:08.418 Unknown (1Dh): Supported LBA-Change 00:11:08.418 00:11:08.418 Error Log 00:11:08.418 ========= 00:11:08.418 00:11:08.418 Arbitration 00:11:08.418 =========== 00:11:08.418 Arbitration Burst: no limit 00:11:08.418 00:11:08.418 Power Management 00:11:08.418 ================ 00:11:08.418 Number of Power States: 1 00:11:08.418 Current Power State: Power State #0 00:11:08.418 Power State #0: 00:11:08.418 Max Power: 25.00 W 00:11:08.418 Non-Operational State: Operational 00:11:08.418 Entry Latency: 16 microseconds 00:11:08.418 Exit Latency: 4 microseconds 00:11:08.418 Relative Read Throughput: 0 00:11:08.418 Relative Read Latency: 0 00:11:08.418 Relative Write Throughput: 0 00:11:08.418 Relative Write Latency: 0 00:11:08.418 Idle Power: Not Reported 00:11:08.418 Active Power: Not Reported 00:11:08.418 Non-Operational Permissive Mode: Not Supported 00:11:08.418 00:11:08.419 Health Information 00:11:08.419 ================== 00:11:08.419 Critical Warnings: 00:11:08.419 Available Spare Space: OK 00:11:08.419 Temperature: OK 00:11:08.419 Device Reliability: OK 00:11:08.419 Read Only: No 00:11:08.419 Volatile Memory Backup: OK 00:11:08.419 Current Temperature: 323 Kelvin (50 Celsius) 00:11:08.419 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:08.419 Available Spare: 0% 00:11:08.419 Available Spare Threshold: 0% 00:11:08.419 Life Percentage Used: 0% 00:11:08.419 Data Units Read: 2065 00:11:08.419 Data Units Written: 1852 00:11:08.419 Host Read Commands: 97550 00:11:08.419 Host Write Commands: 95819 00:11:08.419 Controller Busy Time: 0 minutes 00:11:08.419 Power Cycles: 0 00:11:08.419 Power On Hours: 0 hours 00:11:08.419 Unsafe Shutdowns: 0 00:11:08.419 Unrecoverable Media Errors: 0 00:11:08.419 Lifetime Error Log Entries: 0 00:11:08.419 Warning Temperature Time: 0 minutes 00:11:08.419 Critical Temperature Time: 0 minutes 00:11:08.419 00:11:08.419 Number of Queues 00:11:08.419 ================ 00:11:08.419 Number of I/O Submission Queues: 64 00:11:08.419 Number of I/O Completion Queues: 64 00:11:08.419 00:11:08.419 ZNS Specific Controller Data 00:11:08.419 ============================ 00:11:08.419 Zone Append Size Limit: 0 00:11:08.419 00:11:08.419 00:11:08.419 Active Namespaces 00:11:08.419 ================= 00:11:08.419 Namespace ID:1 00:11:08.419 Error Recovery Timeout: Unlimited 00:11:08.419 Command Set Identifier: NVM (00h) 00:11:08.419 Deallocate: Supported 00:11:08.419 Deallocated/Unwritten Error: Supported 00:11:08.419 Deallocated Read Value: All 0x00 00:11:08.419 Deallocate in Write Zeroes: Not Supported 00:11:08.419 Deallocated Guard Field: 0xFFFF 00:11:08.419 Flush: Supported 00:11:08.419 Reservation: Not Supported 00:11:08.419 Namespace Sharing Capabilities: Private 00:11:08.419 Size (in LBAs): 1048576 (4GiB) 00:11:08.419 Capacity (in LBAs): 1048576 (4GiB) 00:11:08.419 Utilization (in LBAs): 1048576 (4GiB) 00:11:08.419 Thin Provisioning: Not Supported 00:11:08.419 Per-NS Atomic Units: No 00:11:08.419 Maximum Single Source Range Length: 128 00:11:08.419 Maximum Copy Length: 128 00:11:08.419 Maximum Source Range Count: 128 00:11:08.419 NGUID/EUI64 Never Reused: No 00:11:08.419 Namespace Write Protected: No 00:11:08.419 Number of LBA Formats: 8 00:11:08.419 Current LBA Format: LBA Format #04 00:11:08.419 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:11:08.419 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:08.419 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:08.419 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:08.419 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:08.419 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:08.419 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:08.419 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:08.419 00:11:08.419 NVM Specific Namespace Data 00:11:08.419 =========================== 00:11:08.419 Logical Block Storage Tag Mask: 0 00:11:08.419 Protection Information Capabilities: 00:11:08.419 16b Guard Protection Information Storage Tag Support: No 00:11:08.419 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:08.419 Storage Tag Check Read Support: No 00:11:08.419 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Namespace ID:2 00:11:08.419 Error Recovery Timeout: Unlimited 00:11:08.419 Command Set Identifier: NVM (00h) 00:11:08.419 Deallocate: Supported 00:11:08.419 Deallocated/Unwritten Error: Supported 00:11:08.419 Deallocated Read Value: All 0x00 00:11:08.419 Deallocate in Write Zeroes: Not Supported 00:11:08.419 Deallocated Guard Field: 0xFFFF 00:11:08.419 Flush: Supported 00:11:08.419 Reservation: Not Supported 00:11:08.419 Namespace Sharing Capabilities: Private 00:11:08.419 Size (in LBAs): 1048576 (4GiB) 00:11:08.419 Capacity (in LBAs): 1048576 (4GiB) 00:11:08.419 Utilization (in LBAs): 1048576 (4GiB) 00:11:08.419 Thin Provisioning: Not Supported 00:11:08.419 Per-NS Atomic Units: No 00:11:08.419 Maximum Single Source Range Length: 128 00:11:08.419 Maximum Copy Length: 128 00:11:08.419 Maximum Source Range Count: 128 00:11:08.419 NGUID/EUI64 Never Reused: No 00:11:08.419 Namespace Write Protected: No 00:11:08.419 Number of LBA Formats: 8 00:11:08.419 Current LBA Format: LBA Format #04 00:11:08.419 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:08.419 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:08.419 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:08.419 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:08.419 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:08.419 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:08.419 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:08.419 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:08.419 00:11:08.419 NVM Specific Namespace Data 00:11:08.419 =========================== 00:11:08.419 Logical Block Storage Tag Mask: 0 00:11:08.419 Protection Information Capabilities: 00:11:08.419 16b Guard Protection Information Storage Tag Support: No 00:11:08.419 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
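(Editor's aside, not part of the test output: the "(1GiB)" and "(4GiB)" annotations in the namespace listings above are just the LBA count multiplied by the data size of the current LBA format, here format #04 with 4096-byte blocks and no metadata. An illustrative check of that arithmetic:)

/* Reproduce the capacity annotations printed above; values are copied
 * from the log, the program itself is illustrative only. */
#include <stdio.h>

int main(void)
{
    unsigned long long lbas_fdp  = 262144ULL;  /* shared FDP namespace above */
    unsigned long long lbas_priv = 1048576ULL; /* private namespaces 1-3 */
    unsigned long long block     = 4096ULL;    /* LBA Format #04 data size */
    unsigned long long gib       = 1024ULL * 1024 * 1024;

    printf("%llu LBAs -> %lluGiB\n", lbas_fdp,  lbas_fdp  * block / gib); /* 1GiB */
    printf("%llu LBAs -> %lluGiB\n", lbas_priv, lbas_priv * block / gib); /* 4GiB */
    return 0;
}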
00:11:08.419 Storage Tag Check Read Support: No 00:11:08.419 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Namespace ID:3 00:11:08.419 Error Recovery Timeout: Unlimited 00:11:08.419 Command Set Identifier: NVM (00h) 00:11:08.419 Deallocate: Supported 00:11:08.419 Deallocated/Unwritten Error: Supported 00:11:08.419 Deallocated Read Value: All 0x00 00:11:08.419 Deallocate in Write Zeroes: Not Supported 00:11:08.419 Deallocated Guard Field: 0xFFFF 00:11:08.419 Flush: Supported 00:11:08.419 Reservation: Not Supported 00:11:08.419 Namespace Sharing Capabilities: Private 00:11:08.419 Size (in LBAs): 1048576 (4GiB) 00:11:08.419 Capacity (in LBAs): 1048576 (4GiB) 00:11:08.419 Utilization (in LBAs): 1048576 (4GiB) 00:11:08.419 Thin Provisioning: Not Supported 00:11:08.419 Per-NS Atomic Units: No 00:11:08.419 Maximum Single Source Range Length: 128 00:11:08.419 Maximum Copy Length: 128 00:11:08.419 Maximum Source Range Count: 128 00:11:08.419 NGUID/EUI64 Never Reused: No 00:11:08.419 Namespace Write Protected: No 00:11:08.419 Number of LBA Formats: 8 00:11:08.419 Current LBA Format: LBA Format #04 00:11:08.419 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:08.419 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:08.419 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:08.419 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:08.419 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:08.419 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:08.419 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:08.419 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:08.419 00:11:08.419 NVM Specific Namespace Data 00:11:08.419 =========================== 00:11:08.419 Logical Block Storage Tag Mask: 0 00:11:08.419 Protection Information Capabilities: 00:11:08.419 16b Guard Protection Information Storage Tag Support: No 00:11:08.419 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:08.419 Storage Tag Check Read Support: No 00:11:08.419 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.419 11:19:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:08.419 11:19:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:08.679 ===================================================== 00:11:08.679 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:08.679 ===================================================== 00:11:08.679 Controller Capabilities/Features 00:11:08.679 ================================ 00:11:08.679 Vendor ID: 1b36 00:11:08.679 Subsystem Vendor ID: 1af4 00:11:08.679 Serial Number: 12340 00:11:08.679 Model Number: QEMU NVMe Ctrl 00:11:08.679 Firmware Version: 8.0.0 00:11:08.679 Recommended Arb Burst: 6 00:11:08.679 IEEE OUI Identifier: 00 54 52 00:11:08.679 Multi-path I/O 00:11:08.679 May have multiple subsystem ports: No 00:11:08.679 May have multiple controllers: No 00:11:08.679 Associated with SR-IOV VF: No 00:11:08.679 Max Data Transfer Size: 524288 00:11:08.679 Max Number of Namespaces: 256 00:11:08.679 Max Number of I/O Queues: 64 00:11:08.679 NVMe Specification Version (VS): 1.4 00:11:08.679 NVMe Specification Version (Identify): 1.4 00:11:08.679 Maximum Queue Entries: 2048 00:11:08.679 Contiguous Queues Required: Yes 00:11:08.679 Arbitration Mechanisms Supported 00:11:08.679 Weighted Round Robin: Not Supported 00:11:08.679 Vendor Specific: Not Supported 00:11:08.679 Reset Timeout: 7500 ms 00:11:08.679 Doorbell Stride: 4 bytes 00:11:08.679 NVM Subsystem Reset: Not Supported 00:11:08.679 Command Sets Supported 00:11:08.679 NVM Command Set: Supported 00:11:08.679 Boot Partition: Not Supported 00:11:08.679 Memory Page Size Minimum: 4096 bytes 00:11:08.679 Memory Page Size Maximum: 65536 bytes 00:11:08.679 Persistent Memory Region: Not Supported 00:11:08.679 Optional Asynchronous Events Supported 00:11:08.679 Namespace Attribute Notices: Supported 00:11:08.679 Firmware Activation Notices: Not Supported 00:11:08.679 ANA Change Notices: Not Supported 00:11:08.679 PLE Aggregate Log Change Notices: Not Supported 00:11:08.679 LBA Status Info Alert Notices: Not Supported 00:11:08.679 EGE Aggregate Log Change Notices: Not Supported 00:11:08.679 Normal NVM Subsystem Shutdown event: Not Supported 00:11:08.679 Zone Descriptor Change Notices: Not Supported 00:11:08.679 Discovery Log Change Notices: Not Supported 00:11:08.679 Controller Attributes 00:11:08.679 128-bit Host Identifier: Not Supported 00:11:08.679 Non-Operational Permissive Mode: Not Supported 00:11:08.679 NVM Sets: Not Supported 00:11:08.679 Read Recovery Levels: Not Supported 00:11:08.679 Endurance Groups: Not Supported 00:11:08.679 Predictable Latency Mode: Not Supported 00:11:08.679 Traffic Based Keep Alive: Not Supported 00:11:08.679 Namespace Granularity: Not Supported 00:11:08.679 SQ Associations: Not Supported 00:11:08.679 UUID List: Not Supported 00:11:08.679 Multi-Domain Subsystem: Not Supported 00:11:08.679 Fixed Capacity Management: Not Supported 00:11:08.679 Variable Capacity Management: Not Supported 00:11:08.679 Delete Endurance Group: Not Supported 00:11:08.679 Delete NVM Set: Not Supported 00:11:08.679 Extended LBA Formats Supported: Supported 00:11:08.679 Flexible Data Placement Supported: Not Supported 00:11:08.679 00:11:08.679 Controller Memory Buffer Support 00:11:08.679 ================================ 00:11:08.679 Supported: No 00:11:08.679 00:11:08.679 Persistent Memory Region Support 00:11:08.679 
================================ 00:11:08.679 Supported: No 00:11:08.679 00:11:08.679 Admin Command Set Attributes 00:11:08.679 ============================ 00:11:08.679 Security Send/Receive: Not Supported 00:11:08.679 Format NVM: Supported 00:11:08.679 Firmware Activate/Download: Not Supported 00:11:08.679 Namespace Management: Supported 00:11:08.679 Device Self-Test: Not Supported 00:11:08.679 Directives: Supported 00:11:08.679 NVMe-MI: Not Supported 00:11:08.679 Virtualization Management: Not Supported 00:11:08.679 Doorbell Buffer Config: Supported 00:11:08.679 Get LBA Status Capability: Not Supported 00:11:08.679 Command & Feature Lockdown Capability: Not Supported 00:11:08.679 Abort Command Limit: 4 00:11:08.679 Async Event Request Limit: 4 00:11:08.679 Number of Firmware Slots: N/A 00:11:08.679 Firmware Slot 1 Read-Only: N/A 00:11:08.679 Firmware Activation Without Reset: N/A 00:11:08.679 Multiple Update Detection Support: N/A 00:11:08.679 Firmware Update Granularity: No Information Provided 00:11:08.679 Per-Namespace SMART Log: Yes 00:11:08.679 Asymmetric Namespace Access Log Page: Not Supported 00:11:08.679 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:08.679 Command Effects Log Page: Supported 00:11:08.679 Get Log Page Extended Data: Supported 00:11:08.679 Telemetry Log Pages: Not Supported 00:11:08.679 Persistent Event Log Pages: Not Supported 00:11:08.679 Supported Log Pages Log Page: May Support 00:11:08.679 Commands Supported & Effects Log Page: Not Supported 00:11:08.679 Feature Identifiers & Effects Log Page: May Support 00:11:08.679 NVMe-MI Commands & Effects Log Page: May Support 00:11:08.679 Data Area 4 for Telemetry Log: Not Supported 00:11:08.679 Error Log Page Entries Supported: 1 00:11:08.679 Keep Alive: Not Supported 00:11:08.679 00:11:08.679 NVM Command Set Attributes 00:11:08.679 ========================== 00:11:08.679 Submission Queue Entry Size 00:11:08.679 Max: 64 00:11:08.679 Min: 64 00:11:08.679 Completion Queue Entry Size 00:11:08.679 Max: 16 00:11:08.679 Min: 16 00:11:08.679 Number of Namespaces: 256 00:11:08.679 Compare Command: Supported 00:11:08.679 Write Uncorrectable Command: Not Supported 00:11:08.679 Dataset Management Command: Supported 00:11:08.679 Write Zeroes Command: Supported 00:11:08.679 Set Features Save Field: Supported 00:11:08.679 Reservations: Not Supported 00:11:08.679 Timestamp: Supported 00:11:08.679 Copy: Supported 00:11:08.679 Volatile Write Cache: Present 00:11:08.679 Atomic Write Unit (Normal): 1 00:11:08.679 Atomic Write Unit (PFail): 1 00:11:08.679 Atomic Compare & Write Unit: 1 00:11:08.679 Fused Compare & Write: Not Supported 00:11:08.679 Scatter-Gather List 00:11:08.679 SGL Command Set: Supported 00:11:08.679 SGL Keyed: Not Supported 00:11:08.679 SGL Bit Bucket Descriptor: Not Supported 00:11:08.679 SGL Metadata Pointer: Not Supported 00:11:08.679 Oversized SGL: Not Supported 00:11:08.679 SGL Metadata Address: Not Supported 00:11:08.679 SGL Offset: Not Supported 00:11:08.679 Transport SGL Data Block: Not Supported 00:11:08.679 Replay Protected Memory Block: Not Supported 00:11:08.679 00:11:08.679 Firmware Slot Information 00:11:08.679 ========================= 00:11:08.679 Active slot: 1 00:11:08.679 Slot 1 Firmware Revision: 1.0 00:11:08.680 00:11:08.680 00:11:08.680 Commands Supported and Effects 00:11:08.680 ============================== 00:11:08.680 Admin Commands 00:11:08.680 -------------- 00:11:08.680 Delete I/O Submission Queue (00h): Supported 00:11:08.680 Create I/O Submission Queue (01h): Supported 00:11:08.680 
Get Log Page (02h): Supported 00:11:08.680 Delete I/O Completion Queue (04h): Supported 00:11:08.680 Create I/O Completion Queue (05h): Supported 00:11:08.680 Identify (06h): Supported 00:11:08.680 Abort (08h): Supported 00:11:08.680 Set Features (09h): Supported 00:11:08.680 Get Features (0Ah): Supported 00:11:08.680 Asynchronous Event Request (0Ch): Supported 00:11:08.680 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:08.680 Directive Send (19h): Supported 00:11:08.680 Directive Receive (1Ah): Supported 00:11:08.680 Virtualization Management (1Ch): Supported 00:11:08.680 Doorbell Buffer Config (7Ch): Supported 00:11:08.680 Format NVM (80h): Supported LBA-Change 00:11:08.680 I/O Commands 00:11:08.680 ------------ 00:11:08.680 Flush (00h): Supported LBA-Change 00:11:08.680 Write (01h): Supported LBA-Change 00:11:08.680 Read (02h): Supported 00:11:08.680 Compare (05h): Supported 00:11:08.680 Write Zeroes (08h): Supported LBA-Change 00:11:08.680 Dataset Management (09h): Supported LBA-Change 00:11:08.680 Unknown (0Ch): Supported 00:11:08.680 Unknown (12h): Supported 00:11:08.680 Copy (19h): Supported LBA-Change 00:11:08.680 Unknown (1Dh): Supported LBA-Change 00:11:08.680 00:11:08.680 Error Log 00:11:08.680 ========= 00:11:08.680 00:11:08.680 Arbitration 00:11:08.680 =========== 00:11:08.680 Arbitration Burst: no limit 00:11:08.680 00:11:08.680 Power Management 00:11:08.680 ================ 00:11:08.680 Number of Power States: 1 00:11:08.680 Current Power State: Power State #0 00:11:08.680 Power State #0: 00:11:08.680 Max Power: 25.00 W 00:11:08.680 Non-Operational State: Operational 00:11:08.680 Entry Latency: 16 microseconds 00:11:08.680 Exit Latency: 4 microseconds 00:11:08.680 Relative Read Throughput: 0 00:11:08.680 Relative Read Latency: 0 00:11:08.680 Relative Write Throughput: 0 00:11:08.680 Relative Write Latency: 0 00:11:08.680 Idle Power: Not Reported 00:11:08.680 Active Power: Not Reported 00:11:08.680 Non-Operational Permissive Mode: Not Supported 00:11:08.680 00:11:08.680 Health Information 00:11:08.680 ================== 00:11:08.680 Critical Warnings: 00:11:08.680 Available Spare Space: OK 00:11:08.680 Temperature: OK 00:11:08.680 Device Reliability: OK 00:11:08.680 Read Only: No 00:11:08.680 Volatile Memory Backup: OK 00:11:08.680 Current Temperature: 323 Kelvin (50 Celsius) 00:11:08.680 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:08.680 Available Spare: 0% 00:11:08.680 Available Spare Threshold: 0% 00:11:08.680 Life Percentage Used: 0% 00:11:08.680 Data Units Read: 651 00:11:08.680 Data Units Written: 579 00:11:08.680 Host Read Commands: 32108 00:11:08.680 Host Write Commands: 31894 00:11:08.680 Controller Busy Time: 0 minutes 00:11:08.680 Power Cycles: 0 00:11:08.680 Power On Hours: 0 hours 00:11:08.680 Unsafe Shutdowns: 0 00:11:08.680 Unrecoverable Media Errors: 0 00:11:08.680 Lifetime Error Log Entries: 0 00:11:08.680 Warning Temperature Time: 0 minutes 00:11:08.680 Critical Temperature Time: 0 minutes 00:11:08.680 00:11:08.680 Number of Queues 00:11:08.680 ================ 00:11:08.680 Number of I/O Submission Queues: 64 00:11:08.680 Number of I/O Completion Queues: 64 00:11:08.680 00:11:08.680 ZNS Specific Controller Data 00:11:08.680 ============================ 00:11:08.680 Zone Append Size Limit: 0 00:11:08.680 00:11:08.680 00:11:08.680 Active Namespaces 00:11:08.680 ================= 00:11:08.680 Namespace ID:1 00:11:08.680 Error Recovery Timeout: Unlimited 00:11:08.680 Command Set Identifier: NVM (00h) 00:11:08.680 Deallocate: Supported 
00:11:08.680 Deallocated/Unwritten Error: Supported 00:11:08.680 Deallocated Read Value: All 0x00 00:11:08.680 Deallocate in Write Zeroes: Not Supported 00:11:08.680 Deallocated Guard Field: 0xFFFF 00:11:08.680 Flush: Supported 00:11:08.680 Reservation: Not Supported 00:11:08.680 Metadata Transferred as: Separate Metadata Buffer 00:11:08.680 Namespace Sharing Capabilities: Private 00:11:08.680 Size (in LBAs): 1548666 (5GiB) 00:11:08.680 Capacity (in LBAs): 1548666 (5GiB) 00:11:08.680 Utilization (in LBAs): 1548666 (5GiB) 00:11:08.680 Thin Provisioning: Not Supported 00:11:08.680 Per-NS Atomic Units: No 00:11:08.680 Maximum Single Source Range Length: 128 00:11:08.680 Maximum Copy Length: 128 00:11:08.680 Maximum Source Range Count: 128 00:11:08.680 NGUID/EUI64 Never Reused: No 00:11:08.680 Namespace Write Protected: No 00:11:08.680 Number of LBA Formats: 8 00:11:08.680 Current LBA Format: LBA Format #07 00:11:08.680 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:08.680 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:08.680 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:08.680 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:08.680 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:08.680 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:08.680 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:08.680 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:08.680 00:11:08.680 NVM Specific Namespace Data 00:11:08.680 =========================== 00:11:08.680 Logical Block Storage Tag Mask: 0 00:11:08.680 Protection Information Capabilities: 00:11:08.680 16b Guard Protection Information Storage Tag Support: No 00:11:08.680 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:08.680 Storage Tag Check Read Support: No 00:11:08.680 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.680 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.680 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.680 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.680 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.680 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.680 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.680 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.680 11:19:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:08.680 11:19:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:08.939 ===================================================== 00:11:08.939 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:08.939 ===================================================== 00:11:08.939 Controller Capabilities/Features 00:11:08.939 ================================ 00:11:08.939 Vendor ID: 1b36 00:11:08.939 Subsystem Vendor ID: 1af4 00:11:08.939 Serial Number: 12341 00:11:08.939 Model Number: QEMU NVMe Ctrl 00:11:08.939 Firmware Version: 8.0.0 00:11:08.939 Recommended Arb Burst: 6 00:11:08.939 IEEE OUI Identifier: 00 54 52 00:11:08.939 Multi-path I/O 00:11:08.939 May have multiple subsystem ports: No 00:11:08.939 May have multiple 
controllers: No 00:11:08.939 Associated with SR-IOV VF: No 00:11:08.939 Max Data Transfer Size: 524288 00:11:08.939 Max Number of Namespaces: 256 00:11:08.939 Max Number of I/O Queues: 64 00:11:08.939 NVMe Specification Version (VS): 1.4 00:11:08.940 NVMe Specification Version (Identify): 1.4 00:11:08.940 Maximum Queue Entries: 2048 00:11:08.940 Contiguous Queues Required: Yes 00:11:08.940 Arbitration Mechanisms Supported 00:11:08.940 Weighted Round Robin: Not Supported 00:11:08.940 Vendor Specific: Not Supported 00:11:08.940 Reset Timeout: 7500 ms 00:11:08.940 Doorbell Stride: 4 bytes 00:11:08.940 NVM Subsystem Reset: Not Supported 00:11:08.940 Command Sets Supported 00:11:08.940 NVM Command Set: Supported 00:11:08.940 Boot Partition: Not Supported 00:11:08.940 Memory Page Size Minimum: 4096 bytes 00:11:08.940 Memory Page Size Maximum: 65536 bytes 00:11:08.940 Persistent Memory Region: Not Supported 00:11:08.940 Optional Asynchronous Events Supported 00:11:08.940 Namespace Attribute Notices: Supported 00:11:08.940 Firmware Activation Notices: Not Supported 00:11:08.940 ANA Change Notices: Not Supported 00:11:08.940 PLE Aggregate Log Change Notices: Not Supported 00:11:08.940 LBA Status Info Alert Notices: Not Supported 00:11:08.940 EGE Aggregate Log Change Notices: Not Supported 00:11:08.940 Normal NVM Subsystem Shutdown event: Not Supported 00:11:08.940 Zone Descriptor Change Notices: Not Supported 00:11:08.940 Discovery Log Change Notices: Not Supported 00:11:08.940 Controller Attributes 00:11:08.940 128-bit Host Identifier: Not Supported 00:11:08.940 Non-Operational Permissive Mode: Not Supported 00:11:08.940 NVM Sets: Not Supported 00:11:08.940 Read Recovery Levels: Not Supported 00:11:08.940 Endurance Groups: Not Supported 00:11:08.940 Predictable Latency Mode: Not Supported 00:11:08.940 Traffic Based Keep Alive: Not Supported 00:11:08.940 Namespace Granularity: Not Supported 00:11:08.940 SQ Associations: Not Supported 00:11:08.940 UUID List: Not Supported 00:11:08.940 Multi-Domain Subsystem: Not Supported 00:11:08.940 Fixed Capacity Management: Not Supported 00:11:08.940 Variable Capacity Management: Not Supported 00:11:08.940 Delete Endurance Group: Not Supported 00:11:08.940 Delete NVM Set: Not Supported 00:11:08.940 Extended LBA Formats Supported: Supported 00:11:08.940 Flexible Data Placement Supported: Not Supported 00:11:08.940 00:11:08.940 Controller Memory Buffer Support 00:11:08.940 ================================ 00:11:08.940 Supported: No 00:11:08.940 00:11:08.940 Persistent Memory Region Support 00:11:08.940 ================================ 00:11:08.940 Supported: No 00:11:08.940 00:11:08.940 Admin Command Set Attributes 00:11:08.940 ============================ 00:11:08.940 Security Send/Receive: Not Supported 00:11:08.940 Format NVM: Supported 00:11:08.940 Firmware Activate/Download: Not Supported 00:11:08.940 Namespace Management: Supported 00:11:08.940 Device Self-Test: Not Supported 00:11:08.940 Directives: Supported 00:11:08.940 NVMe-MI: Not Supported 00:11:08.940 Virtualization Management: Not Supported 00:11:08.940 Doorbell Buffer Config: Supported 00:11:08.940 Get LBA Status Capability: Not Supported 00:11:08.940 Command & Feature Lockdown Capability: Not Supported 00:11:08.940 Abort Command Limit: 4 00:11:08.940 Async Event Request Limit: 4 00:11:08.940 Number of Firmware Slots: N/A 00:11:08.940 Firmware Slot 1 Read-Only: N/A 00:11:08.940 Firmware Activation Without Reset: N/A 00:11:08.940 Multiple Update Detection Support: N/A 00:11:08.940 Firmware Update 
Granularity: No Information Provided 00:11:08.940 Per-Namespace SMART Log: Yes 00:11:08.940 Asymmetric Namespace Access Log Page: Not Supported 00:11:08.940 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:08.940 Command Effects Log Page: Supported 00:11:08.940 Get Log Page Extended Data: Supported 00:11:08.940 Telemetry Log Pages: Not Supported 00:11:08.940 Persistent Event Log Pages: Not Supported 00:11:08.940 Supported Log Pages Log Page: May Support 00:11:08.940 Commands Supported & Effects Log Page: Not Supported 00:11:08.940 Feature Identifiers & Effects Log Page: May Support 00:11:08.940 NVMe-MI Commands & Effects Log Page: May Support 00:11:08.940 Data Area 4 for Telemetry Log: Not Supported 00:11:08.940 Error Log Page Entries Supported: 1 00:11:08.940 Keep Alive: Not Supported 00:11:08.940 00:11:08.940 NVM Command Set Attributes 00:11:08.940 ========================== 00:11:08.940 Submission Queue Entry Size 00:11:08.940 Max: 64 00:11:08.940 Min: 64 00:11:08.940 Completion Queue Entry Size 00:11:08.940 Max: 16 00:11:08.940 Min: 16 00:11:08.940 Number of Namespaces: 256 00:11:08.940 Compare Command: Supported 00:11:08.940 Write Uncorrectable Command: Not Supported 00:11:08.940 Dataset Management Command: Supported 00:11:08.940 Write Zeroes Command: Supported 00:11:08.940 Set Features Save Field: Supported 00:11:08.940 Reservations: Not Supported 00:11:08.940 Timestamp: Supported 00:11:08.940 Copy: Supported 00:11:08.940 Volatile Write Cache: Present 00:11:08.940 Atomic Write Unit (Normal): 1 00:11:08.940 Atomic Write Unit (PFail): 1 00:11:08.940 Atomic Compare & Write Unit: 1 00:11:08.940 Fused Compare & Write: Not Supported 00:11:08.940 Scatter-Gather List 00:11:08.940 SGL Command Set: Supported 00:11:08.940 SGL Keyed: Not Supported 00:11:08.940 SGL Bit Bucket Descriptor: Not Supported 00:11:08.940 SGL Metadata Pointer: Not Supported 00:11:08.940 Oversized SGL: Not Supported 00:11:08.940 SGL Metadata Address: Not Supported 00:11:08.940 SGL Offset: Not Supported 00:11:08.940 Transport SGL Data Block: Not Supported 00:11:08.940 Replay Protected Memory Block: Not Supported 00:11:08.940 00:11:08.940 Firmware Slot Information 00:11:08.940 ========================= 00:11:08.940 Active slot: 1 00:11:08.940 Slot 1 Firmware Revision: 1.0 00:11:08.940 00:11:08.940 00:11:08.940 Commands Supported and Effects 00:11:08.940 ============================== 00:11:08.940 Admin Commands 00:11:08.940 -------------- 00:11:08.940 Delete I/O Submission Queue (00h): Supported 00:11:08.940 Create I/O Submission Queue (01h): Supported 00:11:08.940 Get Log Page (02h): Supported 00:11:08.940 Delete I/O Completion Queue (04h): Supported 00:11:08.940 Create I/O Completion Queue (05h): Supported 00:11:08.940 Identify (06h): Supported 00:11:08.940 Abort (08h): Supported 00:11:08.940 Set Features (09h): Supported 00:11:08.940 Get Features (0Ah): Supported 00:11:08.940 Asynchronous Event Request (0Ch): Supported 00:11:08.940 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:08.940 Directive Send (19h): Supported 00:11:08.940 Directive Receive (1Ah): Supported 00:11:08.940 Virtualization Management (1Ch): Supported 00:11:08.940 Doorbell Buffer Config (7Ch): Supported 00:11:08.940 Format NVM (80h): Supported LBA-Change 00:11:08.940 I/O Commands 00:11:08.940 ------------ 00:11:08.940 Flush (00h): Supported LBA-Change 00:11:08.940 Write (01h): Supported LBA-Change 00:11:08.940 Read (02h): Supported 00:11:08.940 Compare (05h): Supported 00:11:08.940 Write Zeroes (08h): Supported LBA-Change 00:11:08.940 
Dataset Management (09h): Supported LBA-Change 00:11:08.940 Unknown (0Ch): Supported 00:11:08.940 Unknown (12h): Supported 00:11:08.940 Copy (19h): Supported LBA-Change 00:11:08.940 Unknown (1Dh): Supported LBA-Change 00:11:08.940 00:11:08.940 Error Log 00:11:08.940 ========= 00:11:08.940 00:11:08.940 Arbitration 00:11:08.940 =========== 00:11:08.940 Arbitration Burst: no limit 00:11:08.940 00:11:08.940 Power Management 00:11:08.940 ================ 00:11:08.940 Number of Power States: 1 00:11:08.940 Current Power State: Power State #0 00:11:08.940 Power State #0: 00:11:08.940 Max Power: 25.00 W 00:11:08.940 Non-Operational State: Operational 00:11:08.940 Entry Latency: 16 microseconds 00:11:08.940 Exit Latency: 4 microseconds 00:11:08.940 Relative Read Throughput: 0 00:11:08.940 Relative Read Latency: 0 00:11:08.940 Relative Write Throughput: 0 00:11:08.940 Relative Write Latency: 0 00:11:08.940 Idle Power: Not Reported 00:11:08.940 Active Power: Not Reported 00:11:08.940 Non-Operational Permissive Mode: Not Supported 00:11:08.940 00:11:08.940 Health Information 00:11:08.940 ================== 00:11:08.940 Critical Warnings: 00:11:08.940 Available Spare Space: OK 00:11:08.940 Temperature: OK 00:11:08.940 Device Reliability: OK 00:11:08.940 Read Only: No 00:11:08.940 Volatile Memory Backup: OK 00:11:08.940 Current Temperature: 323 Kelvin (50 Celsius) 00:11:08.940 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:08.940 Available Spare: 0% 00:11:08.940 Available Spare Threshold: 0% 00:11:08.940 Life Percentage Used: 0% 00:11:08.940 Data Units Read: 1004 00:11:08.940 Data Units Written: 864 00:11:08.940 Host Read Commands: 47539 00:11:08.940 Host Write Commands: 46240 00:11:08.940 Controller Busy Time: 0 minutes 00:11:08.940 Power Cycles: 0 00:11:08.940 Power On Hours: 0 hours 00:11:08.940 Unsafe Shutdowns: 0 00:11:08.940 Unrecoverable Media Errors: 0 00:11:08.940 Lifetime Error Log Entries: 0 00:11:08.940 Warning Temperature Time: 0 minutes 00:11:08.940 Critical Temperature Time: 0 minutes 00:11:08.940 00:11:08.940 Number of Queues 00:11:08.940 ================ 00:11:08.941 Number of I/O Submission Queues: 64 00:11:08.941 Number of I/O Completion Queues: 64 00:11:08.941 00:11:08.941 ZNS Specific Controller Data 00:11:08.941 ============================ 00:11:08.941 Zone Append Size Limit: 0 00:11:08.941 00:11:08.941 00:11:08.941 Active Namespaces 00:11:08.941 ================= 00:11:08.941 Namespace ID:1 00:11:08.941 Error Recovery Timeout: Unlimited 00:11:08.941 Command Set Identifier: NVM (00h) 00:11:08.941 Deallocate: Supported 00:11:08.941 Deallocated/Unwritten Error: Supported 00:11:08.941 Deallocated Read Value: All 0x00 00:11:08.941 Deallocate in Write Zeroes: Not Supported 00:11:08.941 Deallocated Guard Field: 0xFFFF 00:11:08.941 Flush: Supported 00:11:08.941 Reservation: Not Supported 00:11:08.941 Namespace Sharing Capabilities: Private 00:11:08.941 Size (in LBAs): 1310720 (5GiB) 00:11:08.941 Capacity (in LBAs): 1310720 (5GiB) 00:11:08.941 Utilization (in LBAs): 1310720 (5GiB) 00:11:08.941 Thin Provisioning: Not Supported 00:11:08.941 Per-NS Atomic Units: No 00:11:08.941 Maximum Single Source Range Length: 128 00:11:08.941 Maximum Copy Length: 128 00:11:08.941 Maximum Source Range Count: 128 00:11:08.941 NGUID/EUI64 Never Reused: No 00:11:08.941 Namespace Write Protected: No 00:11:08.941 Number of LBA Formats: 8 00:11:08.941 Current LBA Format: LBA Format #04 00:11:08.941 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:08.941 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:11:08.941 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:08.941 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:08.941 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:08.941 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:08.941 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:08.941 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:08.941 00:11:08.941 NVM Specific Namespace Data 00:11:08.941 =========================== 00:11:08.941 Logical Block Storage Tag Mask: 0 00:11:08.941 Protection Information Capabilities: 00:11:08.941 16b Guard Protection Information Storage Tag Support: No 00:11:08.941 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:08.941 Storage Tag Check Read Support: No 00:11:08.941 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.941 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.941 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.941 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.941 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.941 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.941 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.941 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:08.941 11:19:31 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:08.941 11:19:31 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:09.508 ===================================================== 00:11:09.508 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:09.508 ===================================================== 00:11:09.508 Controller Capabilities/Features 00:11:09.508 ================================ 00:11:09.508 Vendor ID: 1b36 00:11:09.508 Subsystem Vendor ID: 1af4 00:11:09.508 Serial Number: 12342 00:11:09.508 Model Number: QEMU NVMe Ctrl 00:11:09.508 Firmware Version: 8.0.0 00:11:09.508 Recommended Arb Burst: 6 00:11:09.508 IEEE OUI Identifier: 00 54 52 00:11:09.508 Multi-path I/O 00:11:09.508 May have multiple subsystem ports: No 00:11:09.508 May have multiple controllers: No 00:11:09.508 Associated with SR-IOV VF: No 00:11:09.508 Max Data Transfer Size: 524288 00:11:09.508 Max Number of Namespaces: 256 00:11:09.508 Max Number of I/O Queues: 64 00:11:09.508 NVMe Specification Version (VS): 1.4 00:11:09.508 NVMe Specification Version (Identify): 1.4 00:11:09.508 Maximum Queue Entries: 2048 00:11:09.508 Contiguous Queues Required: Yes 00:11:09.508 Arbitration Mechanisms Supported 00:11:09.508 Weighted Round Robin: Not Supported 00:11:09.508 Vendor Specific: Not Supported 00:11:09.508 Reset Timeout: 7500 ms 00:11:09.508 Doorbell Stride: 4 bytes 00:11:09.508 NVM Subsystem Reset: Not Supported 00:11:09.508 Command Sets Supported 00:11:09.508 NVM Command Set: Supported 00:11:09.508 Boot Partition: Not Supported 00:11:09.508 Memory Page Size Minimum: 4096 bytes 00:11:09.508 Memory Page Size Maximum: 65536 bytes 00:11:09.508 Persistent Memory Region: Not Supported 00:11:09.508 Optional Asynchronous Events Supported 00:11:09.508 Namespace Attribute Notices: Supported 00:11:09.508 Firmware 
Activation Notices: Not Supported 00:11:09.508 ANA Change Notices: Not Supported 00:11:09.508 PLE Aggregate Log Change Notices: Not Supported 00:11:09.508 LBA Status Info Alert Notices: Not Supported 00:11:09.508 EGE Aggregate Log Change Notices: Not Supported 00:11:09.508 Normal NVM Subsystem Shutdown event: Not Supported 00:11:09.508 Zone Descriptor Change Notices: Not Supported 00:11:09.508 Discovery Log Change Notices: Not Supported 00:11:09.508 Controller Attributes 00:11:09.508 128-bit Host Identifier: Not Supported 00:11:09.508 Non-Operational Permissive Mode: Not Supported 00:11:09.508 NVM Sets: Not Supported 00:11:09.508 Read Recovery Levels: Not Supported 00:11:09.508 Endurance Groups: Not Supported 00:11:09.508 Predictable Latency Mode: Not Supported 00:11:09.508 Traffic Based Keep Alive: Not Supported 00:11:09.508 Namespace Granularity: Not Supported 00:11:09.508 SQ Associations: Not Supported 00:11:09.508 UUID List: Not Supported 00:11:09.508 Multi-Domain Subsystem: Not Supported 00:11:09.508 Fixed Capacity Management: Not Supported 00:11:09.508 Variable Capacity Management: Not Supported 00:11:09.508 Delete Endurance Group: Not Supported 00:11:09.508 Delete NVM Set: Not Supported 00:11:09.508 Extended LBA Formats Supported: Supported 00:11:09.508 Flexible Data Placement Supported: Not Supported 00:11:09.508 00:11:09.508 Controller Memory Buffer Support 00:11:09.508 ================================ 00:11:09.508 Supported: No 00:11:09.508 00:11:09.508 Persistent Memory Region Support 00:11:09.508 ================================ 00:11:09.508 Supported: No 00:11:09.508 00:11:09.508 Admin Command Set Attributes 00:11:09.508 ============================ 00:11:09.508 Security Send/Receive: Not Supported 00:11:09.508 Format NVM: Supported 00:11:09.508 Firmware Activate/Download: Not Supported 00:11:09.508 Namespace Management: Supported 00:11:09.508 Device Self-Test: Not Supported 00:11:09.508 Directives: Supported 00:11:09.508 NVMe-MI: Not Supported 00:11:09.508 Virtualization Management: Not Supported 00:11:09.508 Doorbell Buffer Config: Supported 00:11:09.508 Get LBA Status Capability: Not Supported 00:11:09.508 Command & Feature Lockdown Capability: Not Supported 00:11:09.508 Abort Command Limit: 4 00:11:09.508 Async Event Request Limit: 4 00:11:09.508 Number of Firmware Slots: N/A 00:11:09.508 Firmware Slot 1 Read-Only: N/A 00:11:09.508 Firmware Activation Without Reset: N/A 00:11:09.508 Multiple Update Detection Support: N/A 00:11:09.508 Firmware Update Granularity: No Information Provided 00:11:09.508 Per-Namespace SMART Log: Yes 00:11:09.508 Asymmetric Namespace Access Log Page: Not Supported 00:11:09.508 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:09.508 Command Effects Log Page: Supported 00:11:09.508 Get Log Page Extended Data: Supported 00:11:09.508 Telemetry Log Pages: Not Supported 00:11:09.508 Persistent Event Log Pages: Not Supported 00:11:09.508 Supported Log Pages Log Page: May Support 00:11:09.508 Commands Supported & Effects Log Page: Not Supported 00:11:09.508 Feature Identifiers & Effects Log Page: May Support 00:11:09.508 NVMe-MI Commands & Effects Log Page: May Support 00:11:09.508 Data Area 4 for Telemetry Log: Not Supported 00:11:09.508 Error Log Page Entries Supported: 1 00:11:09.508 Keep Alive: Not Supported 00:11:09.508 00:11:09.508 NVM Command Set Attributes 00:11:09.508 ========================== 00:11:09.509 Submission Queue Entry Size 00:11:09.509 Max: 64 00:11:09.509 Min: 64 00:11:09.509 Completion Queue Entry Size 00:11:09.509 Max: 16 
00:11:09.509 Min: 16 00:11:09.509 Number of Namespaces: 256 00:11:09.509 Compare Command: Supported 00:11:09.509 Write Uncorrectable Command: Not Supported 00:11:09.509 Dataset Management Command: Supported 00:11:09.509 Write Zeroes Command: Supported 00:11:09.509 Set Features Save Field: Supported 00:11:09.509 Reservations: Not Supported 00:11:09.509 Timestamp: Supported 00:11:09.509 Copy: Supported 00:11:09.509 Volatile Write Cache: Present 00:11:09.509 Atomic Write Unit (Normal): 1 00:11:09.509 Atomic Write Unit (PFail): 1 00:11:09.509 Atomic Compare & Write Unit: 1 00:11:09.509 Fused Compare & Write: Not Supported 00:11:09.509 Scatter-Gather List 00:11:09.509 SGL Command Set: Supported 00:11:09.509 SGL Keyed: Not Supported 00:11:09.509 SGL Bit Bucket Descriptor: Not Supported 00:11:09.509 SGL Metadata Pointer: Not Supported 00:11:09.509 Oversized SGL: Not Supported 00:11:09.509 SGL Metadata Address: Not Supported 00:11:09.509 SGL Offset: Not Supported 00:11:09.509 Transport SGL Data Block: Not Supported 00:11:09.509 Replay Protected Memory Block: Not Supported 00:11:09.509 00:11:09.509 Firmware Slot Information 00:11:09.509 ========================= 00:11:09.509 Active slot: 1 00:11:09.509 Slot 1 Firmware Revision: 1.0 00:11:09.509 00:11:09.509 00:11:09.509 Commands Supported and Effects 00:11:09.509 ============================== 00:11:09.509 Admin Commands 00:11:09.509 -------------- 00:11:09.509 Delete I/O Submission Queue (00h): Supported 00:11:09.509 Create I/O Submission Queue (01h): Supported 00:11:09.509 Get Log Page (02h): Supported 00:11:09.509 Delete I/O Completion Queue (04h): Supported 00:11:09.509 Create I/O Completion Queue (05h): Supported 00:11:09.509 Identify (06h): Supported 00:11:09.509 Abort (08h): Supported 00:11:09.509 Set Features (09h): Supported 00:11:09.509 Get Features (0Ah): Supported 00:11:09.509 Asynchronous Event Request (0Ch): Supported 00:11:09.509 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:09.509 Directive Send (19h): Supported 00:11:09.509 Directive Receive (1Ah): Supported 00:11:09.509 Virtualization Management (1Ch): Supported 00:11:09.509 Doorbell Buffer Config (7Ch): Supported 00:11:09.509 Format NVM (80h): Supported LBA-Change 00:11:09.509 I/O Commands 00:11:09.509 ------------ 00:11:09.509 Flush (00h): Supported LBA-Change 00:11:09.509 Write (01h): Supported LBA-Change 00:11:09.509 Read (02h): Supported 00:11:09.509 Compare (05h): Supported 00:11:09.509 Write Zeroes (08h): Supported LBA-Change 00:11:09.509 Dataset Management (09h): Supported LBA-Change 00:11:09.509 Unknown (0Ch): Supported 00:11:09.509 Unknown (12h): Supported 00:11:09.509 Copy (19h): Supported LBA-Change 00:11:09.509 Unknown (1Dh): Supported LBA-Change 00:11:09.509 00:11:09.509 Error Log 00:11:09.509 ========= 00:11:09.509 00:11:09.509 Arbitration 00:11:09.509 =========== 00:11:09.509 Arbitration Burst: no limit 00:11:09.509 00:11:09.509 Power Management 00:11:09.509 ================ 00:11:09.509 Number of Power States: 1 00:11:09.509 Current Power State: Power State #0 00:11:09.509 Power State #0: 00:11:09.509 Max Power: 25.00 W 00:11:09.509 Non-Operational State: Operational 00:11:09.509 Entry Latency: 16 microseconds 00:11:09.509 Exit Latency: 4 microseconds 00:11:09.509 Relative Read Throughput: 0 00:11:09.509 Relative Read Latency: 0 00:11:09.509 Relative Write Throughput: 0 00:11:09.509 Relative Write Latency: 0 00:11:09.509 Idle Power: Not Reported 00:11:09.509 Active Power: Not Reported 00:11:09.509 Non-Operational Permissive Mode: Not Supported 
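(Editor's aside, not part of the test output: the Health Information block that follows, like the earlier ones, reports each temperature twice, in Kelvin and in Celsius. The identify output reflects a plain integer offset, C = K - 273, dropping the 0.15 fraction; a one-liner to sanity-check the reported pairs, included only as an illustration:)

/* Check the "323 Kelvin (50 Celsius)" style pairs in the dumps. */
#include <stdio.h>

int main(void)
{
    int kelvin[] = { 323, 343 };  /* current temperature and threshold above */
    for (int i = 0; i < 2; i++)
        printf("%d Kelvin (%d Celsius)\n", kelvin[i], kelvin[i] - 273);
    return 0;  /* prints 50 and 70, matching the log */
}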
00:11:09.509 00:11:09.509 Health Information 00:11:09.509 ================== 00:11:09.509 Critical Warnings: 00:11:09.509 Available Spare Space: OK 00:11:09.509 Temperature: OK 00:11:09.509 Device Reliability: OK 00:11:09.509 Read Only: No 00:11:09.509 Volatile Memory Backup: OK 00:11:09.509 Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.509 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:09.509 Available Spare: 0% 00:11:09.509 Available Spare Threshold: 0% 00:11:09.509 Life Percentage Used: 0% 00:11:09.509 Data Units Read: 2065 00:11:09.509 Data Units Written: 1852 00:11:09.509 Host Read Commands: 97550 00:11:09.509 Host Write Commands: 95819 00:11:09.509 Controller Busy Time: 0 minutes 00:11:09.509 Power Cycles: 0 00:11:09.509 Power On Hours: 0 hours 00:11:09.509 Unsafe Shutdowns: 0 00:11:09.509 Unrecoverable Media Errors: 0 00:11:09.509 Lifetime Error Log Entries: 0 00:11:09.509 Warning Temperature Time: 0 minutes 00:11:09.509 Critical Temperature Time: 0 minutes 00:11:09.509 00:11:09.509 Number of Queues 00:11:09.509 ================ 00:11:09.509 Number of I/O Submission Queues: 64 00:11:09.509 Number of I/O Completion Queues: 64 00:11:09.509 00:11:09.509 ZNS Specific Controller Data 00:11:09.509 ============================ 00:11:09.509 Zone Append Size Limit: 0 00:11:09.509 00:11:09.509 00:11:09.509 Active Namespaces 00:11:09.509 ================= 00:11:09.509 Namespace ID:1 00:11:09.509 Error Recovery Timeout: Unlimited 00:11:09.509 Command Set Identifier: NVM (00h) 00:11:09.509 Deallocate: Supported 00:11:09.509 Deallocated/Unwritten Error: Supported 00:11:09.509 Deallocated Read Value: All 0x00 00:11:09.509 Deallocate in Write Zeroes: Not Supported 00:11:09.509 Deallocated Guard Field: 0xFFFF 00:11:09.509 Flush: Supported 00:11:09.509 Reservation: Not Supported 00:11:09.509 Namespace Sharing Capabilities: Private 00:11:09.509 Size (in LBAs): 1048576 (4GiB) 00:11:09.509 Capacity (in LBAs): 1048576 (4GiB) 00:11:09.509 Utilization (in LBAs): 1048576 (4GiB) 00:11:09.509 Thin Provisioning: Not Supported 00:11:09.509 Per-NS Atomic Units: No 00:11:09.509 Maximum Single Source Range Length: 128 00:11:09.509 Maximum Copy Length: 128 00:11:09.509 Maximum Source Range Count: 128 00:11:09.509 NGUID/EUI64 Never Reused: No 00:11:09.509 Namespace Write Protected: No 00:11:09.509 Number of LBA Formats: 8 00:11:09.509 Current LBA Format: LBA Format #04 00:11:09.509 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.509 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.509 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.509 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.509 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.509 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.509 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.509 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.509 00:11:09.509 NVM Specific Namespace Data 00:11:09.509 =========================== 00:11:09.509 Logical Block Storage Tag Mask: 0 00:11:09.509 Protection Information Capabilities: 00:11:09.509 16b Guard Protection Information Storage Tag Support: No 00:11:09.509 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.509 Storage Tag Check Read Support: No 00:11:09.509 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.509 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.509 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.509 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.509 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.509 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.509 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.509 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.509 Namespace ID:2 00:11:09.509 Error Recovery Timeout: Unlimited 00:11:09.509 Command Set Identifier: NVM (00h) 00:11:09.509 Deallocate: Supported 00:11:09.509 Deallocated/Unwritten Error: Supported 00:11:09.509 Deallocated Read Value: All 0x00 00:11:09.509 Deallocate in Write Zeroes: Not Supported 00:11:09.509 Deallocated Guard Field: 0xFFFF 00:11:09.509 Flush: Supported 00:11:09.509 Reservation: Not Supported 00:11:09.509 Namespace Sharing Capabilities: Private 00:11:09.509 Size (in LBAs): 1048576 (4GiB) 00:11:09.509 Capacity (in LBAs): 1048576 (4GiB) 00:11:09.509 Utilization (in LBAs): 1048576 (4GiB) 00:11:09.509 Thin Provisioning: Not Supported 00:11:09.509 Per-NS Atomic Units: No 00:11:09.509 Maximum Single Source Range Length: 128 00:11:09.509 Maximum Copy Length: 128 00:11:09.509 Maximum Source Range Count: 128 00:11:09.509 NGUID/EUI64 Never Reused: No 00:11:09.509 Namespace Write Protected: No 00:11:09.509 Number of LBA Formats: 8 00:11:09.509 Current LBA Format: LBA Format #04 00:11:09.509 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.509 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.509 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.509 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.509 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.509 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.509 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.509 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.510 00:11:09.510 NVM Specific Namespace Data 00:11:09.510 =========================== 00:11:09.510 Logical Block Storage Tag Mask: 0 00:11:09.510 Protection Information Capabilities: 00:11:09.510 16b Guard Protection Information Storage Tag Support: No 00:11:09.510 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.510 Storage Tag Check Read Support: No 00:11:09.510 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Namespace ID:3 00:11:09.510 Error Recovery Timeout: Unlimited 00:11:09.510 Command Set Identifier: NVM (00h) 00:11:09.510 Deallocate: Supported 00:11:09.510 Deallocated/Unwritten Error: Supported 00:11:09.510 Deallocated Read 
Value: All 0x00 00:11:09.510 Deallocate in Write Zeroes: Not Supported 00:11:09.510 Deallocated Guard Field: 0xFFFF 00:11:09.510 Flush: Supported 00:11:09.510 Reservation: Not Supported 00:11:09.510 Namespace Sharing Capabilities: Private 00:11:09.510 Size (in LBAs): 1048576 (4GiB) 00:11:09.510 Capacity (in LBAs): 1048576 (4GiB) 00:11:09.510 Utilization (in LBAs): 1048576 (4GiB) 00:11:09.510 Thin Provisioning: Not Supported 00:11:09.510 Per-NS Atomic Units: No 00:11:09.510 Maximum Single Source Range Length: 128 00:11:09.510 Maximum Copy Length: 128 00:11:09.510 Maximum Source Range Count: 128 00:11:09.510 NGUID/EUI64 Never Reused: No 00:11:09.510 Namespace Write Protected: No 00:11:09.510 Number of LBA Formats: 8 00:11:09.510 Current LBA Format: LBA Format #04 00:11:09.510 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.510 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.510 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.510 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.510 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.510 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.510 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.510 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.510 00:11:09.510 NVM Specific Namespace Data 00:11:09.510 =========================== 00:11:09.510 Logical Block Storage Tag Mask: 0 00:11:09.510 Protection Information Capabilities: 00:11:09.510 16b Guard Protection Information Storage Tag Support: No 00:11:09.510 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.510 Storage Tag Check Read Support: No 00:11:09.510 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.510 11:19:31 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:09.510 11:19:31 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:09.769 ===================================================== 00:11:09.769 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:09.769 ===================================================== 00:11:09.769 Controller Capabilities/Features 00:11:09.769 ================================ 00:11:09.769 Vendor ID: 1b36 00:11:09.769 Subsystem Vendor ID: 1af4 00:11:09.769 Serial Number: 12343 00:11:09.769 Model Number: QEMU NVMe Ctrl 00:11:09.769 Firmware Version: 8.0.0 00:11:09.769 Recommended Arb Burst: 6 00:11:09.769 IEEE OUI Identifier: 00 54 52 00:11:09.769 Multi-path I/O 00:11:09.769 May have multiple subsystem ports: No 00:11:09.769 May have multiple controllers: Yes 00:11:09.769 Associated with SR-IOV VF: No 00:11:09.769 Max Data Transfer Size: 524288 00:11:09.769 Max Number of Namespaces: 
256 00:11:09.769 Max Number of I/O Queues: 64 00:11:09.769 NVMe Specification Version (VS): 1.4 00:11:09.769 NVMe Specification Version (Identify): 1.4 00:11:09.769 Maximum Queue Entries: 2048 00:11:09.769 Contiguous Queues Required: Yes 00:11:09.769 Arbitration Mechanisms Supported 00:11:09.769 Weighted Round Robin: Not Supported 00:11:09.769 Vendor Specific: Not Supported 00:11:09.769 Reset Timeout: 7500 ms 00:11:09.769 Doorbell Stride: 4 bytes 00:11:09.769 NVM Subsystem Reset: Not Supported 00:11:09.769 Command Sets Supported 00:11:09.769 NVM Command Set: Supported 00:11:09.769 Boot Partition: Not Supported 00:11:09.769 Memory Page Size Minimum: 4096 bytes 00:11:09.769 Memory Page Size Maximum: 65536 bytes 00:11:09.769 Persistent Memory Region: Not Supported 00:11:09.769 Optional Asynchronous Events Supported 00:11:09.769 Namespace Attribute Notices: Supported 00:11:09.769 Firmware Activation Notices: Not Supported 00:11:09.769 ANA Change Notices: Not Supported 00:11:09.769 PLE Aggregate Log Change Notices: Not Supported 00:11:09.769 LBA Status Info Alert Notices: Not Supported 00:11:09.769 EGE Aggregate Log Change Notices: Not Supported 00:11:09.769 Normal NVM Subsystem Shutdown event: Not Supported 00:11:09.769 Zone Descriptor Change Notices: Not Supported 00:11:09.769 Discovery Log Change Notices: Not Supported 00:11:09.769 Controller Attributes 00:11:09.769 128-bit Host Identifier: Not Supported 00:11:09.769 Non-Operational Permissive Mode: Not Supported 00:11:09.769 NVM Sets: Not Supported 00:11:09.769 Read Recovery Levels: Not Supported 00:11:09.769 Endurance Groups: Supported 00:11:09.769 Predictable Latency Mode: Not Supported 00:11:09.769 Traffic Based Keep Alive: Not Supported 00:11:09.769 Namespace Granularity: Not Supported 00:11:09.769 SQ Associations: Not Supported 00:11:09.769 UUID List: Not Supported 00:11:09.769 Multi-Domain Subsystem: Not Supported 00:11:09.769 Fixed Capacity Management: Not Supported 00:11:09.769 Variable Capacity Management: Not Supported 00:11:09.769 Delete Endurance Group: Not Supported 00:11:09.769 Delete NVM Set: Not Supported 00:11:09.769 Extended LBA Formats Supported: Supported 00:11:09.769 Flexible Data Placement Supported: Supported 00:11:09.769 00:11:09.769 Controller Memory Buffer Support 00:11:09.769 ================================ 00:11:09.769 Supported: No 00:11:09.769 00:11:09.769 Persistent Memory Region Support 00:11:09.769 ================================ 00:11:09.769 Supported: No 00:11:09.769 00:11:09.769 Admin Command Set Attributes 00:11:09.769 ============================ 00:11:09.769 Security Send/Receive: Not Supported 00:11:09.769 Format NVM: Supported 00:11:09.769 Firmware Activate/Download: Not Supported 00:11:09.769 Namespace Management: Supported 00:11:09.769 Device Self-Test: Not Supported 00:11:09.769 Directives: Supported 00:11:09.769 NVMe-MI: Not Supported 00:11:09.769 Virtualization Management: Not Supported 00:11:09.769 Doorbell Buffer Config: Supported 00:11:09.769 Get LBA Status Capability: Not Supported 00:11:09.770 Command & Feature Lockdown Capability: Not Supported 00:11:09.770 Abort Command Limit: 4 00:11:09.770 Async Event Request Limit: 4 00:11:09.770 Number of Firmware Slots: N/A 00:11:09.770 Firmware Slot 1 Read-Only: N/A 00:11:09.770 Firmware Activation Without Reset: N/A 00:11:09.770 Multiple Update Detection Support: N/A 00:11:09.770 Firmware Update Granularity: No Information Provided 00:11:09.770 Per-Namespace SMART Log: Yes 00:11:09.770 Asymmetric Namespace Access Log Page: Not Supported 
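The attribute block above is where the controller at 0000:00:13.0 differs from the other controllers in this run: Endurance Groups and Flexible Data Placement are both reported as Supported, which is what the fdp-subsys3 subsystem and the FDP log pages further down in this dump rely on. A minimal sketch for pulling those capability lines back out of a captured dump, assuming the identify output were saved to a hypothetical identify-13.log (this job only prints it to the console):

    # Hypothetical post-processing, not part of the test run: filter the
    # capability fields the FDP tests depend on out of a saved identify dump.
    grep -E 'Endurance Groups|Flexible Data Placement|Per-Namespace SMART Log' identify-13.log
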
00:11:09.770 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:09.770 Command Effects Log Page: Supported 00:11:09.770 Get Log Page Extended Data: Supported 00:11:09.770 Telemetry Log Pages: Not Supported 00:11:09.770 Persistent Event Log Pages: Not Supported 00:11:09.770 Supported Log Pages Log Page: May Support 00:11:09.770 Commands Supported & Effects Log Page: Not Supported 00:11:09.770 Feature Identifiers & Effects Log Page: May Support 00:11:09.770 NVMe-MI Commands & Effects Log Page: May Support 00:11:09.770 Data Area 4 for Telemetry Log: Not Supported 00:11:09.770 Error Log Page Entries Supported: 1 00:11:09.770 Keep Alive: Not Supported 00:11:09.770 00:11:09.770 NVM Command Set Attributes 00:11:09.770 ========================== 00:11:09.770 Submission Queue Entry Size 00:11:09.770 Max: 64 00:11:09.770 Min: 64 00:11:09.770 Completion Queue Entry Size 00:11:09.770 Max: 16 00:11:09.770 Min: 16 00:11:09.770 Number of Namespaces: 256 00:11:09.770 Compare Command: Supported 00:11:09.770 Write Uncorrectable Command: Not Supported 00:11:09.770 Dataset Management Command: Supported 00:11:09.770 Write Zeroes Command: Supported 00:11:09.770 Set Features Save Field: Supported 00:11:09.770 Reservations: Not Supported 00:11:09.770 Timestamp: Supported 00:11:09.770 Copy: Supported 00:11:09.770 Volatile Write Cache: Present 00:11:09.770 Atomic Write Unit (Normal): 1 00:11:09.770 Atomic Write Unit (PFail): 1 00:11:09.770 Atomic Compare & Write Unit: 1 00:11:09.770 Fused Compare & Write: Not Supported 00:11:09.770 Scatter-Gather List 00:11:09.770 SGL Command Set: Supported 00:11:09.770 SGL Keyed: Not Supported 00:11:09.770 SGL Bit Bucket Descriptor: Not Supported 00:11:09.770 SGL Metadata Pointer: Not Supported 00:11:09.770 Oversized SGL: Not Supported 00:11:09.770 SGL Metadata Address: Not Supported 00:11:09.770 SGL Offset: Not Supported 00:11:09.770 Transport SGL Data Block: Not Supported 00:11:09.770 Replay Protected Memory Block: Not Supported 00:11:09.770 00:11:09.770 Firmware Slot Information 00:11:09.770 ========================= 00:11:09.770 Active slot: 1 00:11:09.770 Slot 1 Firmware Revision: 1.0 00:11:09.770 00:11:09.770 00:11:09.770 Commands Supported and Effects 00:11:09.770 ============================== 00:11:09.770 Admin Commands 00:11:09.770 -------------- 00:11:09.770 Delete I/O Submission Queue (00h): Supported 00:11:09.770 Create I/O Submission Queue (01h): Supported 00:11:09.770 Get Log Page (02h): Supported 00:11:09.770 Delete I/O Completion Queue (04h): Supported 00:11:09.770 Create I/O Completion Queue (05h): Supported 00:11:09.770 Identify (06h): Supported 00:11:09.770 Abort (08h): Supported 00:11:09.770 Set Features (09h): Supported 00:11:09.770 Get Features (0Ah): Supported 00:11:09.770 Asynchronous Event Request (0Ch): Supported 00:11:09.770 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:09.770 Directive Send (19h): Supported 00:11:09.770 Directive Receive (1Ah): Supported 00:11:09.770 Virtualization Management (1Ch): Supported 00:11:09.770 Doorbell Buffer Config (7Ch): Supported 00:11:09.770 Format NVM (80h): Supported LBA-Change 00:11:09.770 I/O Commands 00:11:09.770 ------------ 00:11:09.770 Flush (00h): Supported LBA-Change 00:11:09.770 Write (01h): Supported LBA-Change 00:11:09.770 Read (02h): Supported 00:11:09.770 Compare (05h): Supported 00:11:09.770 Write Zeroes (08h): Supported LBA-Change 00:11:09.770 Dataset Management (09h): Supported LBA-Change 00:11:09.770 Unknown (0Ch): Supported 00:11:09.770 Unknown (12h): Supported 00:11:09.770 Copy 
(19h): Supported LBA-Change 00:11:09.770 Unknown (1Dh): Supported LBA-Change 00:11:09.770 00:11:09.770 Error Log 00:11:09.770 ========= 00:11:09.770 00:11:09.770 Arbitration 00:11:09.770 =========== 00:11:09.770 Arbitration Burst: no limit 00:11:09.770 00:11:09.770 Power Management 00:11:09.770 ================ 00:11:09.770 Number of Power States: 1 00:11:09.770 Current Power State: Power State #0 00:11:09.770 Power State #0: 00:11:09.770 Max Power: 25.00 W 00:11:09.770 Non-Operational State: Operational 00:11:09.770 Entry Latency: 16 microseconds 00:11:09.770 Exit Latency: 4 microseconds 00:11:09.770 Relative Read Throughput: 0 00:11:09.770 Relative Read Latency: 0 00:11:09.770 Relative Write Throughput: 0 00:11:09.770 Relative Write Latency: 0 00:11:09.770 Idle Power: Not Reported 00:11:09.770 Active Power: Not Reported 00:11:09.770 Non-Operational Permissive Mode: Not Supported 00:11:09.770 00:11:09.770 Health Information 00:11:09.770 ================== 00:11:09.770 Critical Warnings: 00:11:09.770 Available Spare Space: OK 00:11:09.770 Temperature: OK 00:11:09.770 Device Reliability: OK 00:11:09.770 Read Only: No 00:11:09.770 Volatile Memory Backup: OK 00:11:09.770 Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.770 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:09.770 Available Spare: 0% 00:11:09.770 Available Spare Threshold: 0% 00:11:09.770 Life Percentage Used: 0% 00:11:09.770 Data Units Read: 722 00:11:09.770 Data Units Written: 651 00:11:09.770 Host Read Commands: 32780 00:11:09.770 Host Write Commands: 32203 00:11:09.770 Controller Busy Time: 0 minutes 00:11:09.770 Power Cycles: 0 00:11:09.770 Power On Hours: 0 hours 00:11:09.770 Unsafe Shutdowns: 0 00:11:09.770 Unrecoverable Media Errors: 0 00:11:09.770 Lifetime Error Log Entries: 0 00:11:09.770 Warning Temperature Time: 0 minutes 00:11:09.770 Critical Temperature Time: 0 minutes 00:11:09.770 00:11:09.770 Number of Queues 00:11:09.770 ================ 00:11:09.770 Number of I/O Submission Queues: 64 00:11:09.770 Number of I/O Completion Queues: 64 00:11:09.770 00:11:09.770 ZNS Specific Controller Data 00:11:09.770 ============================ 00:11:09.770 Zone Append Size Limit: 0 00:11:09.770 00:11:09.770 00:11:09.770 Active Namespaces 00:11:09.770 ================= 00:11:09.770 Namespace ID:1 00:11:09.770 Error Recovery Timeout: Unlimited 00:11:09.770 Command Set Identifier: NVM (00h) 00:11:09.770 Deallocate: Supported 00:11:09.770 Deallocated/Unwritten Error: Supported 00:11:09.770 Deallocated Read Value: All 0x00 00:11:09.770 Deallocate in Write Zeroes: Not Supported 00:11:09.770 Deallocated Guard Field: 0xFFFF 00:11:09.770 Flush: Supported 00:11:09.770 Reservation: Not Supported 00:11:09.770 Namespace Sharing Capabilities: Multiple Controllers 00:11:09.770 Size (in LBAs): 262144 (1GiB) 00:11:09.770 Capacity (in LBAs): 262144 (1GiB) 00:11:09.770 Utilization (in LBAs): 262144 (1GiB) 00:11:09.770 Thin Provisioning: Not Supported 00:11:09.770 Per-NS Atomic Units: No 00:11:09.770 Maximum Single Source Range Length: 128 00:11:09.770 Maximum Copy Length: 128 00:11:09.770 Maximum Source Range Count: 128 00:11:09.770 NGUID/EUI64 Never Reused: No 00:11:09.770 Namespace Write Protected: No 00:11:09.770 Endurance group ID: 1 00:11:09.770 Number of LBA Formats: 8 00:11:09.770 Current LBA Format: LBA Format #04 00:11:09.770 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.770 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.770 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.770 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:11:09.770 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.770 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.770 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.770 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.770 00:11:09.770 Get Feature FDP: 00:11:09.770 ================ 00:11:09.770 Enabled: Yes 00:11:09.770 FDP configuration index: 0 00:11:09.770 00:11:09.770 FDP configurations log page 00:11:09.770 =========================== 00:11:09.770 Number of FDP configurations: 1 00:11:09.770 Version: 0 00:11:09.770 Size: 112 00:11:09.770 FDP Configuration Descriptor: 0 00:11:09.770 Descriptor Size: 96 00:11:09.770 Reclaim Group Identifier format: 2 00:11:09.770 FDP Volatile Write Cache: Not Present 00:11:09.770 FDP Configuration: Valid 00:11:09.770 Vendor Specific Size: 0 00:11:09.770 Number of Reclaim Groups: 2 00:11:09.770 Number of Reclaim Unit Handles: 8 00:11:09.770 Max Placement Identifiers: 128 00:11:09.770 Number of Namespaces Supported: 256 00:11:09.770 Reclaim Unit Nominal Size: 6000000 bytes 00:11:09.770 Estimated Reclaim Unit Time Limit: Not Reported 00:11:09.770 RUH Desc #000: RUH Type: Initially Isolated 00:11:09.770 RUH Desc #001: RUH Type: Initially Isolated 00:11:09.770 RUH Desc #002: RUH Type: Initially Isolated 00:11:09.770 RUH Desc #003: RUH Type: Initially Isolated 00:11:09.770 RUH Desc #004: RUH Type: Initially Isolated 00:11:09.771 RUH Desc #005: RUH Type: Initially Isolated 00:11:09.771 RUH Desc #006: RUH Type: Initially Isolated 00:11:09.771 RUH Desc #007: RUH Type: Initially Isolated 00:11:09.771 00:11:09.771 FDP reclaim unit handle usage log page 00:11:09.771 ====================================== 00:11:09.771 Number of Reclaim Unit Handles: 8 00:11:09.771 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:09.771 RUH Usage Desc #001: RUH Attributes: Unused 00:11:09.771 RUH Usage Desc #002: RUH Attributes: Unused 00:11:09.771 RUH Usage Desc #003: RUH Attributes: Unused 00:11:09.771 RUH Usage Desc #004: RUH Attributes: Unused 00:11:09.771 RUH Usage Desc #005: RUH Attributes: Unused 00:11:09.771 RUH Usage Desc #006: RUH Attributes: Unused 00:11:09.771 RUH Usage Desc #007: RUH Attributes: Unused 00:11:09.771 00:11:09.771 FDP statistics log page 00:11:09.771 ======================= 00:11:09.771 Host bytes with metadata written: 411607040 00:11:09.771 Media bytes with metadata written: 411652096 00:11:09.771 Media bytes erased: 0 00:11:09.771 00:11:09.771 FDP events log page 00:11:09.771 =================== 00:11:09.771 Number of FDP events: 0 00:11:09.771 00:11:09.771 NVM Specific Namespace Data 00:11:09.771 =========================== 00:11:09.771 Logical Block Storage Tag Mask: 0 00:11:09.771 Protection Information Capabilities: 00:11:09.771 16b Guard Protection Information Storage Tag Support: No 00:11:09.771 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.771 Storage Tag Check Read Support: No 00:11:09.771 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.771 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.771 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.771 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.771 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.771 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.771 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.771 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.771 ************************************ 00:11:09.771 END TEST nvme_identify 00:11:09.771 ************************************ 00:11:09.771 00:11:09.771 real 0m1.787s 00:11:09.771 user 0m0.767s 00:11:09.771 sys 0m0.795s 00:11:09.771 11:19:31 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.771 11:19:31 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:09.771 11:19:31 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:09.771 11:19:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:09.771 11:19:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.771 11:19:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:09.771 ************************************ 00:11:09.771 START TEST nvme_perf 00:11:09.771 ************************************ 00:11:09.771 11:19:31 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:11:09.771 11:19:31 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:11.225 Initializing NVMe Controllers 00:11:11.225 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:11.225 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:11.225 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:11.225 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:11.225 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:11.225 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:11.225 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:11.225 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:11.225 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:11.225 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:11.225 Initialization complete. Launching workers. 
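The results table below comes from spdk_nvme_perf, invoked above with -q 128 -w read -o 12288 -t 1: queue depth 128, 100% reads, 12288-byte (12 KiB) I/Os, for one second, across all six attached namespaces. The MiB/s column is simply the IOPS column scaled by the I/O size, so each row can be sanity-checked by hand; a minimal sketch (the awk one-liner is illustrative only, not something the job runs):

    # 12288 bytes per I/O, 1 MiB = 1048576 bytes:
    # 12307.26 IOPS * 12288 / 1048576 = ~144.23 MiB/s, matching each
    # per-namespace row; the Total row is the six rows summed
    # (6 * 12307.26 = ~73843.6 IOPS, reported as 73843.58).
    awk 'BEGIN { printf "%.2f MiB/s\n", 12307.26 * 12288 / 1048576 }'
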
00:11:11.225 ======================================================== 00:11:11.225 Latency(us) 00:11:11.225 Device Information : IOPS MiB/s Average min max 00:11:11.225 PCIE (0000:00:10.0) NSID 1 from core 0: 12307.26 144.23 10414.33 8206.00 39116.06 00:11:11.225 PCIE (0000:00:11.0) NSID 1 from core 0: 12307.26 144.23 10390.20 8222.71 36606.29 00:11:11.225 PCIE (0000:00:13.0) NSID 1 from core 0: 12307.26 144.23 10364.21 8344.08 34607.35 00:11:11.225 PCIE (0000:00:12.0) NSID 1 from core 0: 12307.26 144.23 10337.29 8362.55 31961.52 00:11:11.225 PCIE (0000:00:12.0) NSID 2 from core 0: 12307.26 144.23 10310.87 8347.71 29278.75 00:11:11.225 PCIE (0000:00:12.0) NSID 3 from core 0: 12307.26 144.23 10284.15 8348.07 26603.33 00:11:11.225 ======================================================== 00:11:11.225 Total : 73843.58 865.35 10350.17 8206.00 39116.06 00:11:11.225 00:11:11.225 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:11.225 ================================================================================= 00:11:11.225 1.00000% : 8519.680us 00:11:11.225 10.00000% : 8936.727us 00:11:11.225 25.00000% : 9294.196us 00:11:11.225 50.00000% : 9830.400us 00:11:11.225 75.00000% : 10843.229us 00:11:11.225 90.00000% : 12094.371us 00:11:11.225 95.00000% : 13285.935us 00:11:11.225 98.00000% : 14596.655us 00:11:11.225 99.00000% : 28954.996us 00:11:11.225 99.50000% : 36700.160us 00:11:11.225 99.90000% : 38844.975us 00:11:11.225 99.99000% : 39083.287us 00:11:11.225 99.99900% : 39321.600us 00:11:11.225 99.99990% : 39321.600us 00:11:11.225 99.99999% : 39321.600us 00:11:11.225 00:11:11.225 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:11.225 ================================================================================= 00:11:11.225 1.00000% : 8579.258us 00:11:11.225 10.00000% : 8996.305us 00:11:11.225 25.00000% : 9294.196us 00:11:11.225 50.00000% : 9770.822us 00:11:11.225 75.00000% : 10843.229us 00:11:11.225 90.00000% : 12153.949us 00:11:11.225 95.00000% : 13285.935us 00:11:11.225 98.00000% : 14596.655us 00:11:11.225 99.00000% : 27048.495us 00:11:11.225 99.50000% : 34317.033us 00:11:11.225 99.90000% : 36223.535us 00:11:11.225 99.99000% : 36700.160us 00:11:11.225 99.99900% : 36700.160us 00:11:11.225 99.99990% : 36700.160us 00:11:11.225 99.99999% : 36700.160us 00:11:11.225 00:11:11.225 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:11.225 ================================================================================= 00:11:11.225 1.00000% : 8638.836us 00:11:11.225 10.00000% : 8996.305us 00:11:11.225 25.00000% : 9294.196us 00:11:11.225 50.00000% : 9770.822us 00:11:11.225 75.00000% : 10843.229us 00:11:11.225 90.00000% : 12034.793us 00:11:11.225 95.00000% : 13285.935us 00:11:11.225 98.00000% : 14417.920us 00:11:11.225 99.00000% : 25022.836us 00:11:11.225 99.50000% : 32410.531us 00:11:11.225 99.90000% : 34317.033us 00:11:11.225 99.99000% : 34793.658us 00:11:11.225 99.99900% : 34793.658us 00:11:11.225 99.99990% : 34793.658us 00:11:11.225 99.99999% : 34793.658us 00:11:11.225 00:11:11.225 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:11.225 ================================================================================= 00:11:11.225 1.00000% : 8638.836us 00:11:11.225 10.00000% : 8996.305us 00:11:11.225 25.00000% : 9294.196us 00:11:11.225 50.00000% : 9830.400us 00:11:11.225 75.00000% : 10843.229us 00:11:11.225 90.00000% : 12094.371us 00:11:11.225 95.00000% : 13285.935us 00:11:11.225 98.00000% : 14179.607us 
00:11:11.225 99.00000% : 22401.396us 00:11:11.225 99.50000% : 29669.935us 00:11:11.225 99.90000% : 31695.593us 00:11:11.225 99.99000% : 31933.905us 00:11:11.225 99.99900% : 32172.218us 00:11:11.225 99.99990% : 32172.218us 00:11:11.225 99.99999% : 32172.218us 00:11:11.225 00:11:11.225 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:11.225 ================================================================================= 00:11:11.225 1.00000% : 8638.836us 00:11:11.225 10.00000% : 8996.305us 00:11:11.225 25.00000% : 9294.196us 00:11:11.225 50.00000% : 9830.400us 00:11:11.225 75.00000% : 10843.229us 00:11:11.225 90.00000% : 12094.371us 00:11:11.225 95.00000% : 13345.513us 00:11:11.225 98.00000% : 14298.764us 00:11:11.225 99.00000% : 19779.956us 00:11:11.225 99.50000% : 27048.495us 00:11:11.225 99.90000% : 28835.840us 00:11:11.225 99.99000% : 29312.465us 00:11:11.225 99.99900% : 29312.465us 00:11:11.225 99.99990% : 29312.465us 00:11:11.225 99.99999% : 29312.465us 00:11:11.225 00:11:11.225 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:11.225 ================================================================================= 00:11:11.225 1.00000% : 8638.836us 00:11:11.225 10.00000% : 8996.305us 00:11:11.225 25.00000% : 9294.196us 00:11:11.225 50.00000% : 9830.400us 00:11:11.225 75.00000% : 10843.229us 00:11:11.225 90.00000% : 12094.371us 00:11:11.225 95.00000% : 13226.356us 00:11:11.225 98.00000% : 14417.920us 00:11:11.225 99.00000% : 17158.516us 00:11:11.225 99.50000% : 24427.055us 00:11:11.225 99.90000% : 26214.400us 00:11:11.225 99.99000% : 26571.869us 00:11:11.225 99.99900% : 26691.025us 00:11:11.225 99.99990% : 26691.025us 00:11:11.225 99.99999% : 26691.025us 00:11:11.225 00:11:11.225 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:11.225 ============================================================================== 00:11:11.225 Range in us Cumulative IO count 00:11:11.225 8162.211 - 8221.789: 0.0324% ( 4) 00:11:11.225 8221.789 - 8281.367: 0.0729% ( 5) 00:11:11.225 8281.367 - 8340.945: 0.2591% ( 23) 00:11:11.225 8340.945 - 8400.524: 0.4534% ( 24) 00:11:11.225 8400.524 - 8460.102: 0.7448% ( 36) 00:11:11.225 8460.102 - 8519.680: 1.1658% ( 52) 00:11:11.225 8519.680 - 8579.258: 1.7892% ( 77) 00:11:11.225 8579.258 - 8638.836: 2.6878% ( 111) 00:11:11.225 8638.836 - 8698.415: 3.9022% ( 150) 00:11:11.225 8698.415 - 8757.993: 5.3514% ( 179) 00:11:11.225 8757.993 - 8817.571: 7.0434% ( 209) 00:11:11.225 8817.571 - 8877.149: 9.0350% ( 246) 00:11:11.225 8877.149 - 8936.727: 11.1561% ( 262) 00:11:11.225 8936.727 - 8996.305: 13.4958% ( 289) 00:11:11.225 8996.305 - 9055.884: 15.9974% ( 309) 00:11:11.225 9055.884 - 9115.462: 18.6205% ( 324) 00:11:11.225 9115.462 - 9175.040: 21.3164% ( 333) 00:11:11.225 9175.040 - 9234.618: 23.9233% ( 322) 00:11:11.225 9234.618 - 9294.196: 26.5220% ( 321) 00:11:11.225 9294.196 - 9353.775: 29.2422% ( 336) 00:11:11.225 9353.775 - 9413.353: 32.0839% ( 351) 00:11:11.225 9413.353 - 9472.931: 34.7474% ( 329) 00:11:11.225 9472.931 - 9532.509: 37.4109% ( 329) 00:11:11.226 9532.509 - 9592.087: 40.1392% ( 337) 00:11:11.226 9592.087 - 9651.665: 42.8514% ( 335) 00:11:11.226 9651.665 - 9711.244: 45.6040% ( 340) 00:11:11.226 9711.244 - 9770.822: 48.3808% ( 343) 00:11:11.226 9770.822 - 9830.400: 50.8824% ( 309) 00:11:11.226 9830.400 - 9889.978: 53.4407% ( 316) 00:11:11.226 9889.978 - 9949.556: 55.8695% ( 300) 00:11:11.226 9949.556 - 10009.135: 58.1040% ( 276) 00:11:11.226 10009.135 - 10068.713: 60.1036% ( 247) 00:11:11.226 
10068.713 - 10128.291: 61.9252% ( 225) 00:11:11.226 10128.291 - 10187.869: 63.6334% ( 211) 00:11:11.226 10187.869 - 10247.447: 65.0016% ( 169) 00:11:11.226 10247.447 - 10307.025: 66.1674% ( 144) 00:11:11.226 10307.025 - 10366.604: 67.3818% ( 150) 00:11:11.226 10366.604 - 10426.182: 68.5233% ( 141) 00:11:11.226 10426.182 - 10485.760: 69.5110% ( 122) 00:11:11.226 10485.760 - 10545.338: 70.4663% ( 118) 00:11:11.226 10545.338 - 10604.916: 71.3812% ( 113) 00:11:11.226 10604.916 - 10664.495: 72.3850% ( 124) 00:11:11.226 10664.495 - 10724.073: 73.3161% ( 115) 00:11:11.226 10724.073 - 10783.651: 74.3199% ( 124) 00:11:11.226 10783.651 - 10843.229: 75.4048% ( 134) 00:11:11.226 10843.229 - 10902.807: 76.4734% ( 132) 00:11:11.226 10902.807 - 10962.385: 77.4288% ( 118) 00:11:11.226 10962.385 - 11021.964: 78.5460% ( 138) 00:11:11.226 11021.964 - 11081.542: 79.5661% ( 126) 00:11:11.226 11081.542 - 11141.120: 80.4971% ( 115) 00:11:11.226 11141.120 - 11200.698: 81.4767% ( 121) 00:11:11.226 11200.698 - 11260.276: 82.3025% ( 102) 00:11:11.226 11260.276 - 11319.855: 82.9906% ( 85) 00:11:11.226 11319.855 - 11379.433: 83.6707% ( 84) 00:11:11.226 11379.433 - 11439.011: 84.3264% ( 81) 00:11:11.226 11439.011 - 11498.589: 84.8850% ( 69) 00:11:11.226 11498.589 - 11558.167: 85.4113% ( 65) 00:11:11.226 11558.167 - 11617.745: 85.9942% ( 72) 00:11:11.226 11617.745 - 11677.324: 86.5771% ( 72) 00:11:11.226 11677.324 - 11736.902: 87.1276% ( 68) 00:11:11.226 11736.902 - 11796.480: 87.7591% ( 78) 00:11:11.226 11796.480 - 11856.058: 88.2853% ( 65) 00:11:11.226 11856.058 - 11915.636: 88.7144% ( 53) 00:11:11.226 11915.636 - 11975.215: 89.2568% ( 67) 00:11:11.226 11975.215 - 12034.793: 89.6697% ( 51) 00:11:11.226 12034.793 - 12094.371: 90.0502% ( 47) 00:11:11.226 12094.371 - 12153.949: 90.4064% ( 44) 00:11:11.226 12153.949 - 12213.527: 90.7464% ( 42) 00:11:11.226 12213.527 - 12273.105: 91.0865% ( 42) 00:11:11.226 12273.105 - 12332.684: 91.3779% ( 36) 00:11:11.226 12332.684 - 12392.262: 91.6694% ( 36) 00:11:11.226 12392.262 - 12451.840: 91.9689% ( 37) 00:11:11.226 12451.840 - 12511.418: 92.2685% ( 37) 00:11:11.226 12511.418 - 12570.996: 92.5275% ( 32) 00:11:11.226 12570.996 - 12630.575: 92.8028% ( 34) 00:11:11.226 12630.575 - 12690.153: 93.0214% ( 27) 00:11:11.226 12690.153 - 12749.731: 93.2562% ( 29) 00:11:11.226 12749.731 - 12809.309: 93.4585% ( 25) 00:11:11.226 12809.309 - 12868.887: 93.6933% ( 29) 00:11:11.226 12868.887 - 12928.465: 93.9200% ( 28) 00:11:11.226 12928.465 - 12988.044: 94.1143% ( 24) 00:11:11.226 12988.044 - 13047.622: 94.3572% ( 30) 00:11:11.226 13047.622 - 13107.200: 94.5515% ( 24) 00:11:11.226 13107.200 - 13166.778: 94.7620% ( 26) 00:11:11.226 13166.778 - 13226.356: 94.8996% ( 17) 00:11:11.226 13226.356 - 13285.935: 95.0696% ( 21) 00:11:11.226 13285.935 - 13345.513: 95.2315% ( 20) 00:11:11.226 13345.513 - 13405.091: 95.3692% ( 17) 00:11:11.226 13405.091 - 13464.669: 95.4420% ( 9) 00:11:11.226 13464.669 - 13524.247: 95.5554% ( 14) 00:11:11.226 13524.247 - 13583.825: 95.6768% ( 15) 00:11:11.226 13583.825 - 13643.404: 95.8306% ( 19) 00:11:11.226 13643.404 - 13702.982: 95.9602% ( 16) 00:11:11.226 13702.982 - 13762.560: 96.1383% ( 22) 00:11:11.226 13762.560 - 13822.138: 96.2678% ( 16) 00:11:11.226 13822.138 - 13881.716: 96.4297% ( 20) 00:11:11.226 13881.716 - 13941.295: 96.5674% ( 17) 00:11:11.226 13941.295 - 14000.873: 96.7050% ( 17) 00:11:11.226 14000.873 - 14060.451: 96.8912% ( 23) 00:11:11.226 14060.451 - 14120.029: 97.0612% ( 21) 00:11:11.226 14120.029 - 14179.607: 97.1584% ( 12) 00:11:11.226 14179.607 - 
14239.185: 97.2879% ( 16) 00:11:11.226 14239.185 - 14298.764: 97.4417% ( 19) 00:11:11.226 14298.764 - 14358.342: 97.6117% ( 21) 00:11:11.226 14358.342 - 14417.920: 97.7413% ( 16) 00:11:11.226 14417.920 - 14477.498: 97.8789% ( 17) 00:11:11.226 14477.498 - 14537.076: 97.9922% ( 14) 00:11:11.226 14537.076 - 14596.655: 98.1137% ( 15) 00:11:11.226 14596.655 - 14656.233: 98.2513% ( 17) 00:11:11.226 14656.233 - 14715.811: 98.3727% ( 15) 00:11:11.226 14715.811 - 14775.389: 98.5023% ( 16) 00:11:11.226 14775.389 - 14834.967: 98.5832% ( 10) 00:11:11.226 14834.967 - 14894.545: 98.6480% ( 8) 00:11:11.226 14894.545 - 14954.124: 98.7209% ( 9) 00:11:11.226 14954.124 - 15013.702: 98.7775% ( 7) 00:11:11.226 15013.702 - 15073.280: 98.8180% ( 5) 00:11:11.226 15073.280 - 15132.858: 98.8666% ( 6) 00:11:11.226 15132.858 - 15192.436: 98.8990% ( 4) 00:11:11.226 15192.436 - 15252.015: 98.9394% ( 5) 00:11:11.226 15252.015 - 15371.171: 98.9637% ( 3) 00:11:11.226 28597.527 - 28716.684: 98.9718% ( 1) 00:11:11.226 28716.684 - 28835.840: 98.9961% ( 3) 00:11:11.226 28835.840 - 28954.996: 99.0204% ( 3) 00:11:11.226 28954.996 - 29074.153: 99.0447% ( 3) 00:11:11.226 29074.153 - 29193.309: 99.0609% ( 2) 00:11:11.226 29193.309 - 29312.465: 99.0852% ( 3) 00:11:11.226 29312.465 - 29431.622: 99.1095% ( 3) 00:11:11.226 29431.622 - 29550.778: 99.1337% ( 3) 00:11:11.226 29550.778 - 29669.935: 99.1499% ( 2) 00:11:11.226 29669.935 - 29789.091: 99.1742% ( 3) 00:11:11.226 29789.091 - 29908.247: 99.2066% ( 4) 00:11:11.226 29908.247 - 30027.404: 99.2228% ( 2) 00:11:11.226 30027.404 - 30146.560: 99.2552% ( 4) 00:11:11.226 30146.560 - 30265.716: 99.2714% ( 2) 00:11:11.226 30265.716 - 30384.873: 99.3038% ( 4) 00:11:11.226 30384.873 - 30504.029: 99.3280% ( 3) 00:11:11.226 30504.029 - 30742.342: 99.3685% ( 5) 00:11:11.226 30742.342 - 30980.655: 99.4171% ( 6) 00:11:11.226 30980.655 - 31218.967: 99.4657% ( 6) 00:11:11.226 31218.967 - 31457.280: 99.4819% ( 2) 00:11:11.226 36461.847 - 36700.160: 99.5142% ( 4) 00:11:11.226 36700.160 - 36938.473: 99.5628% ( 6) 00:11:11.226 36938.473 - 37176.785: 99.6114% ( 6) 00:11:11.226 37176.785 - 37415.098: 99.6600% ( 6) 00:11:11.226 37415.098 - 37653.411: 99.7005% ( 5) 00:11:11.226 37653.411 - 37891.724: 99.7571% ( 7) 00:11:11.226 37891.724 - 38130.036: 99.7976% ( 5) 00:11:11.226 38130.036 - 38368.349: 99.8462% ( 6) 00:11:11.226 38368.349 - 38606.662: 99.8948% ( 6) 00:11:11.226 38606.662 - 38844.975: 99.9514% ( 7) 00:11:11.226 38844.975 - 39083.287: 99.9919% ( 5) 00:11:11.226 39083.287 - 39321.600: 100.0000% ( 1) 00:11:11.226 00:11:11.226 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:11.226 ============================================================================== 00:11:11.226 Range in us Cumulative IO count 00:11:11.226 8221.789 - 8281.367: 0.0405% ( 5) 00:11:11.226 8281.367 - 8340.945: 0.0729% ( 4) 00:11:11.226 8340.945 - 8400.524: 0.2267% ( 19) 00:11:11.226 8400.524 - 8460.102: 0.4453% ( 27) 00:11:11.226 8460.102 - 8519.680: 0.6720% ( 28) 00:11:11.226 8519.680 - 8579.258: 1.0687% ( 49) 00:11:11.226 8579.258 - 8638.836: 1.5706% ( 62) 00:11:11.226 8638.836 - 8698.415: 2.4369% ( 107) 00:11:11.226 8698.415 - 8757.993: 3.6350% ( 148) 00:11:11.226 8757.993 - 8817.571: 5.2218% ( 196) 00:11:11.226 8817.571 - 8877.149: 7.0434% ( 225) 00:11:11.226 8877.149 - 8936.727: 9.1564% ( 261) 00:11:11.226 8936.727 - 8996.305: 11.6014% ( 302) 00:11:11.226 8996.305 - 9055.884: 14.2811% ( 331) 00:11:11.226 9055.884 - 9115.462: 17.1389% ( 353) 00:11:11.226 9115.462 - 9175.040: 20.0615% ( 361) 00:11:11.226 
9175.040 - 9234.618: 23.1460% ( 381) 00:11:11.226 9234.618 - 9294.196: 26.2791% ( 387) 00:11:11.226 9294.196 - 9353.775: 29.4122% ( 387) 00:11:11.226 9353.775 - 9413.353: 32.4482% ( 375) 00:11:11.226 9413.353 - 9472.931: 35.4517% ( 371) 00:11:11.226 9472.931 - 9532.509: 38.6010% ( 389) 00:11:11.226 9532.509 - 9592.087: 41.6775% ( 380) 00:11:11.226 9592.087 - 9651.665: 44.6648% ( 369) 00:11:11.226 9651.665 - 9711.244: 47.6603% ( 370) 00:11:11.226 9711.244 - 9770.822: 50.6558% ( 370) 00:11:11.226 9770.822 - 9830.400: 53.4165% ( 341) 00:11:11.227 9830.400 - 9889.978: 55.9181% ( 309) 00:11:11.227 9889.978 - 9949.556: 58.0797% ( 267) 00:11:11.227 9949.556 - 10009.135: 59.9822% ( 235) 00:11:11.227 10009.135 - 10068.713: 61.6337% ( 204) 00:11:11.227 10068.713 - 10128.291: 62.9210% ( 159) 00:11:11.227 10128.291 - 10187.869: 64.2163% ( 160) 00:11:11.227 10187.869 - 10247.447: 65.2364% ( 126) 00:11:11.227 10247.447 - 10307.025: 66.2565% ( 126) 00:11:11.227 10307.025 - 10366.604: 67.2523% ( 123) 00:11:11.227 10366.604 - 10426.182: 68.3047% ( 130) 00:11:11.227 10426.182 - 10485.760: 69.3491% ( 129) 00:11:11.227 10485.760 - 10545.338: 70.3530% ( 124) 00:11:11.227 10545.338 - 10604.916: 71.3245% ( 120) 00:11:11.227 10604.916 - 10664.495: 72.3365% ( 125) 00:11:11.227 10664.495 - 10724.073: 73.4051% ( 132) 00:11:11.227 10724.073 - 10783.651: 74.4171% ( 125) 00:11:11.227 10783.651 - 10843.229: 75.4534% ( 128) 00:11:11.227 10843.229 - 10902.807: 76.4977% ( 129) 00:11:11.227 10902.807 - 10962.385: 77.5664% ( 132) 00:11:11.227 10962.385 - 11021.964: 78.6512% ( 134) 00:11:11.227 11021.964 - 11081.542: 79.6389% ( 122) 00:11:11.227 11081.542 - 11141.120: 80.5619% ( 114) 00:11:11.227 11141.120 - 11200.698: 81.3957% ( 103) 00:11:11.227 11200.698 - 11260.276: 82.1567% ( 94) 00:11:11.227 11260.276 - 11319.855: 82.8449% ( 85) 00:11:11.227 11319.855 - 11379.433: 83.4602% ( 76) 00:11:11.227 11379.433 - 11439.011: 84.0916% ( 78) 00:11:11.227 11439.011 - 11498.589: 84.6988% ( 75) 00:11:11.227 11498.589 - 11558.167: 85.3870% ( 85) 00:11:11.227 11558.167 - 11617.745: 86.0508% ( 82) 00:11:11.227 11617.745 - 11677.324: 86.6499% ( 74) 00:11:11.227 11677.324 - 11736.902: 87.2571% ( 75) 00:11:11.227 11736.902 - 11796.480: 87.8400% ( 72) 00:11:11.227 11796.480 - 11856.058: 88.3177% ( 59) 00:11:11.227 11856.058 - 11915.636: 88.7225% ( 50) 00:11:11.227 11915.636 - 11975.215: 89.0787% ( 44) 00:11:11.227 11975.215 - 12034.793: 89.4511% ( 46) 00:11:11.227 12034.793 - 12094.371: 89.7992% ( 43) 00:11:11.227 12094.371 - 12153.949: 90.1473% ( 43) 00:11:11.227 12153.949 - 12213.527: 90.5117% ( 45) 00:11:11.227 12213.527 - 12273.105: 90.9084% ( 49) 00:11:11.227 12273.105 - 12332.684: 91.2808% ( 46) 00:11:11.227 12332.684 - 12392.262: 91.6208% ( 42) 00:11:11.227 12392.262 - 12451.840: 91.9608% ( 42) 00:11:11.227 12451.840 - 12511.418: 92.2847% ( 40) 00:11:11.227 12511.418 - 12570.996: 92.6247% ( 42) 00:11:11.227 12570.996 - 12630.575: 92.9485% ( 40) 00:11:11.227 12630.575 - 12690.153: 93.2157% ( 33) 00:11:11.227 12690.153 - 12749.731: 93.4585% ( 30) 00:11:11.227 12749.731 - 12809.309: 93.6852% ( 28) 00:11:11.227 12809.309 - 12868.887: 93.9119% ( 28) 00:11:11.227 12868.887 - 12928.465: 94.0981% ( 23) 00:11:11.227 12928.465 - 12988.044: 94.2762% ( 22) 00:11:11.227 12988.044 - 13047.622: 94.4624% ( 23) 00:11:11.227 13047.622 - 13107.200: 94.6567% ( 24) 00:11:11.227 13107.200 - 13166.778: 94.8187% ( 20) 00:11:11.227 13166.778 - 13226.356: 94.9968% ( 22) 00:11:11.227 13226.356 - 13285.935: 95.1587% ( 20) 00:11:11.227 13285.935 - 13345.513: 
95.2801% ( 15) 00:11:11.227 13345.513 - 13405.091: 95.3692% ( 11) 00:11:11.227 13405.091 - 13464.669: 95.4420% ( 9) 00:11:11.227 13464.669 - 13524.247: 95.5392% ( 12) 00:11:11.227 13524.247 - 13583.825: 95.6201% ( 10) 00:11:11.227 13583.825 - 13643.404: 95.6849% ( 8) 00:11:11.227 13643.404 - 13702.982: 95.7983% ( 14) 00:11:11.227 13702.982 - 13762.560: 95.9278% ( 16) 00:11:11.227 13762.560 - 13822.138: 96.0816% ( 19) 00:11:11.227 13822.138 - 13881.716: 96.2192% ( 17) 00:11:11.227 13881.716 - 13941.295: 96.3892% ( 21) 00:11:11.227 13941.295 - 14000.873: 96.5269% ( 17) 00:11:11.227 14000.873 - 14060.451: 96.7131% ( 23) 00:11:11.227 14060.451 - 14120.029: 96.8750% ( 20) 00:11:11.227 14120.029 - 14179.607: 97.0531% ( 22) 00:11:11.227 14179.607 - 14239.185: 97.2312% ( 22) 00:11:11.227 14239.185 - 14298.764: 97.4012% ( 21) 00:11:11.227 14298.764 - 14358.342: 97.5631% ( 20) 00:11:11.227 14358.342 - 14417.920: 97.7008% ( 17) 00:11:11.227 14417.920 - 14477.498: 97.8303% ( 16) 00:11:11.227 14477.498 - 14537.076: 97.9922% ( 20) 00:11:11.227 14537.076 - 14596.655: 98.1380% ( 18) 00:11:11.227 14596.655 - 14656.233: 98.2999% ( 20) 00:11:11.227 14656.233 - 14715.811: 98.4456% ( 18) 00:11:11.227 14715.811 - 14775.389: 98.5670% ( 15) 00:11:11.227 14775.389 - 14834.967: 98.6642% ( 12) 00:11:11.227 14834.967 - 14894.545: 98.7370% ( 9) 00:11:11.227 14894.545 - 14954.124: 98.8099% ( 9) 00:11:11.227 14954.124 - 15013.702: 98.8828% ( 9) 00:11:11.227 15013.702 - 15073.280: 98.9152% ( 4) 00:11:11.227 15073.280 - 15132.858: 98.9394% ( 3) 00:11:11.227 15132.858 - 15192.436: 98.9556% ( 2) 00:11:11.227 15192.436 - 15252.015: 98.9637% ( 1) 00:11:11.227 26810.182 - 26929.338: 98.9961% ( 4) 00:11:11.227 26929.338 - 27048.495: 99.0204% ( 3) 00:11:11.227 27048.495 - 27167.651: 99.0366% ( 2) 00:11:11.227 27167.651 - 27286.807: 99.0609% ( 3) 00:11:11.227 27286.807 - 27405.964: 99.0852% ( 3) 00:11:11.227 27405.964 - 27525.120: 99.1095% ( 3) 00:11:11.227 27525.120 - 27644.276: 99.1337% ( 3) 00:11:11.227 27644.276 - 27763.433: 99.1580% ( 3) 00:11:11.227 27763.433 - 27882.589: 99.1823% ( 3) 00:11:11.227 27882.589 - 28001.745: 99.2066% ( 3) 00:11:11.227 28001.745 - 28120.902: 99.2390% ( 4) 00:11:11.227 28120.902 - 28240.058: 99.2633% ( 3) 00:11:11.227 28240.058 - 28359.215: 99.2876% ( 3) 00:11:11.227 28359.215 - 28478.371: 99.3038% ( 2) 00:11:11.227 28478.371 - 28597.527: 99.3280% ( 3) 00:11:11.227 28597.527 - 28716.684: 99.3604% ( 4) 00:11:11.227 28716.684 - 28835.840: 99.3847% ( 3) 00:11:11.227 28835.840 - 28954.996: 99.4090% ( 3) 00:11:11.227 28954.996 - 29074.153: 99.4333% ( 3) 00:11:11.227 29074.153 - 29193.309: 99.4657% ( 4) 00:11:11.227 29193.309 - 29312.465: 99.4819% ( 2) 00:11:11.227 34078.720 - 34317.033: 99.5223% ( 5) 00:11:11.227 34317.033 - 34555.345: 99.5628% ( 5) 00:11:11.227 34555.345 - 34793.658: 99.6114% ( 6) 00:11:11.227 34793.658 - 35031.971: 99.6600% ( 6) 00:11:11.227 35031.971 - 35270.284: 99.7166% ( 7) 00:11:11.227 35270.284 - 35508.596: 99.7652% ( 6) 00:11:11.227 35508.596 - 35746.909: 99.8219% ( 7) 00:11:11.227 35746.909 - 35985.222: 99.8624% ( 5) 00:11:11.227 35985.222 - 36223.535: 99.9190% ( 7) 00:11:11.227 36223.535 - 36461.847: 99.9676% ( 6) 00:11:11.227 36461.847 - 36700.160: 100.0000% ( 4) 00:11:11.227 00:11:11.227 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:11.227 ============================================================================== 00:11:11.227 Range in us Cumulative IO count 00:11:11.227 8340.945 - 8400.524: 0.0891% ( 11) 00:11:11.227 8400.524 - 8460.102: 0.2834% 
( 24) 00:11:11.227 8460.102 - 8519.680: 0.5829% ( 37) 00:11:11.227 8519.680 - 8579.258: 0.9958% ( 51) 00:11:11.227 8579.258 - 8638.836: 1.5058% ( 63) 00:11:11.227 8638.836 - 8698.415: 2.2830% ( 96) 00:11:11.227 8698.415 - 8757.993: 3.3598% ( 133) 00:11:11.227 8757.993 - 8817.571: 4.9870% ( 201) 00:11:11.227 8817.571 - 8877.149: 6.9948% ( 248) 00:11:11.227 8877.149 - 8936.727: 9.1564% ( 267) 00:11:11.227 8936.727 - 8996.305: 11.5933% ( 301) 00:11:11.227 8996.305 - 9055.884: 14.1839% ( 320) 00:11:11.227 9055.884 - 9115.462: 17.0499% ( 354) 00:11:11.227 9115.462 - 9175.040: 19.8753% ( 349) 00:11:11.227 9175.040 - 9234.618: 23.0327% ( 390) 00:11:11.227 9234.618 - 9294.196: 26.1496% ( 385) 00:11:11.227 9294.196 - 9353.775: 29.3232% ( 392) 00:11:11.227 9353.775 - 9413.353: 32.4077% ( 381) 00:11:11.227 9413.353 - 9472.931: 35.3789% ( 367) 00:11:11.227 9472.931 - 9532.509: 38.4796% ( 383) 00:11:11.227 9532.509 - 9592.087: 41.5236% ( 376) 00:11:11.227 9592.087 - 9651.665: 44.5920% ( 379) 00:11:11.227 9651.665 - 9711.244: 47.7008% ( 384) 00:11:11.227 9711.244 - 9770.822: 50.6720% ( 367) 00:11:11.227 9770.822 - 9830.400: 53.4731% ( 346) 00:11:11.227 9830.400 - 9889.978: 55.9181% ( 302) 00:11:11.227 9889.978 - 9949.556: 58.0392% ( 262) 00:11:11.227 9949.556 - 10009.135: 59.7393% ( 210) 00:11:11.227 10009.135 - 10068.713: 61.2775% ( 190) 00:11:11.227 10068.713 - 10128.291: 62.6943% ( 175) 00:11:11.227 10128.291 - 10187.869: 63.9573% ( 156) 00:11:11.227 10187.869 - 10247.447: 64.9935% ( 128) 00:11:11.227 10247.447 - 10307.025: 66.0703% ( 133) 00:11:11.227 10307.025 - 10366.604: 67.0903% ( 126) 00:11:11.227 10366.604 - 10426.182: 68.0538% ( 119) 00:11:11.227 10426.182 - 10485.760: 69.0819% ( 127) 00:11:11.227 10485.760 - 10545.338: 70.1182% ( 128) 00:11:11.227 10545.338 - 10604.916: 71.0816% ( 119) 00:11:11.227 10604.916 - 10664.495: 72.0612% ( 121) 00:11:11.227 10664.495 - 10724.073: 73.0246% ( 119) 00:11:11.227 10724.073 - 10783.651: 74.0528% ( 127) 00:11:11.228 10783.651 - 10843.229: 75.1214% ( 132) 00:11:11.228 10843.229 - 10902.807: 76.2953% ( 145) 00:11:11.228 10902.807 - 10962.385: 77.4692% ( 145) 00:11:11.228 10962.385 - 11021.964: 78.6431% ( 145) 00:11:11.228 11021.964 - 11081.542: 79.8332% ( 147) 00:11:11.228 11081.542 - 11141.120: 80.8533% ( 126) 00:11:11.228 11141.120 - 11200.698: 81.7519% ( 111) 00:11:11.228 11200.698 - 11260.276: 82.5696% ( 101) 00:11:11.228 11260.276 - 11319.855: 83.3063% ( 91) 00:11:11.228 11319.855 - 11379.433: 83.9216% ( 76) 00:11:11.228 11379.433 - 11439.011: 84.5855% ( 82) 00:11:11.228 11439.011 - 11498.589: 85.2494% ( 82) 00:11:11.228 11498.589 - 11558.167: 85.8970% ( 80) 00:11:11.228 11558.167 - 11617.745: 86.5366% ( 79) 00:11:11.228 11617.745 - 11677.324: 87.1843% ( 80) 00:11:11.228 11677.324 - 11736.902: 87.8157% ( 78) 00:11:11.228 11736.902 - 11796.480: 88.4472% ( 78) 00:11:11.228 11796.480 - 11856.058: 88.9573% ( 63) 00:11:11.228 11856.058 - 11915.636: 89.4349% ( 59) 00:11:11.228 11915.636 - 11975.215: 89.8721% ( 54) 00:11:11.228 11975.215 - 12034.793: 90.1878% ( 39) 00:11:11.228 12034.793 - 12094.371: 90.4712% ( 35) 00:11:11.228 12094.371 - 12153.949: 90.6979% ( 28) 00:11:11.228 12153.949 - 12213.527: 90.9245% ( 28) 00:11:11.228 12213.527 - 12273.105: 91.1269% ( 25) 00:11:11.228 12273.105 - 12332.684: 91.3617% ( 29) 00:11:11.228 12332.684 - 12392.262: 91.6127% ( 31) 00:11:11.228 12392.262 - 12451.840: 91.8718% ( 32) 00:11:11.228 12451.840 - 12511.418: 92.1227% ( 31) 00:11:11.228 12511.418 - 12570.996: 92.3575% ( 29) 00:11:11.228 12570.996 - 12630.575: 92.5923% 
( 29) 00:11:11.228 12630.575 - 12690.153: 92.8756% ( 35) 00:11:11.228 12690.153 - 12749.731: 93.1590% ( 35) 00:11:11.228 12749.731 - 12809.309: 93.4343% ( 34) 00:11:11.228 12809.309 - 12868.887: 93.7095% ( 34) 00:11:11.228 12868.887 - 12928.465: 93.9686% ( 32) 00:11:11.228 12928.465 - 12988.044: 94.1791% ( 26) 00:11:11.228 12988.044 - 13047.622: 94.3896% ( 26) 00:11:11.228 13047.622 - 13107.200: 94.5920% ( 25) 00:11:11.228 13107.200 - 13166.778: 94.7620% ( 21) 00:11:11.228 13166.778 - 13226.356: 94.9320% ( 21) 00:11:11.228 13226.356 - 13285.935: 95.1101% ( 22) 00:11:11.228 13285.935 - 13345.513: 95.2963% ( 23) 00:11:11.228 13345.513 - 13405.091: 95.4420% ( 18) 00:11:11.228 13405.091 - 13464.669: 95.5716% ( 16) 00:11:11.228 13464.669 - 13524.247: 95.6849% ( 14) 00:11:11.228 13524.247 - 13583.825: 95.8306% ( 18) 00:11:11.228 13583.825 - 13643.404: 95.9926% ( 20) 00:11:11.228 13643.404 - 13702.982: 96.1707% ( 22) 00:11:11.228 13702.982 - 13762.560: 96.3245% ( 19) 00:11:11.228 13762.560 - 13822.138: 96.4945% ( 21) 00:11:11.228 13822.138 - 13881.716: 96.6402% ( 18) 00:11:11.228 13881.716 - 13941.295: 96.7778% ( 17) 00:11:11.228 13941.295 - 14000.873: 96.9236% ( 18) 00:11:11.228 14000.873 - 14060.451: 97.0612% ( 17) 00:11:11.228 14060.451 - 14120.029: 97.2069% ( 18) 00:11:11.228 14120.029 - 14179.607: 97.3850% ( 22) 00:11:11.228 14179.607 - 14239.185: 97.5631% ( 22) 00:11:11.228 14239.185 - 14298.764: 97.7251% ( 20) 00:11:11.228 14298.764 - 14358.342: 97.8789% ( 19) 00:11:11.228 14358.342 - 14417.920: 98.0408% ( 20) 00:11:11.228 14417.920 - 14477.498: 98.1784% ( 17) 00:11:11.228 14477.498 - 14537.076: 98.2999% ( 15) 00:11:11.228 14537.076 - 14596.655: 98.4294% ( 16) 00:11:11.228 14596.655 - 14656.233: 98.5347% ( 13) 00:11:11.228 14656.233 - 14715.811: 98.6237% ( 11) 00:11:11.228 14715.811 - 14775.389: 98.6966% ( 9) 00:11:11.228 14775.389 - 14834.967: 98.7613% ( 8) 00:11:11.228 14834.967 - 14894.545: 98.8018% ( 5) 00:11:11.228 14894.545 - 14954.124: 98.8261% ( 3) 00:11:11.228 14954.124 - 15013.702: 98.8585% ( 4) 00:11:11.228 15013.702 - 15073.280: 98.8828% ( 3) 00:11:11.228 15073.280 - 15132.858: 98.9071% ( 3) 00:11:11.228 15132.858 - 15192.436: 98.9394% ( 4) 00:11:11.228 15192.436 - 15252.015: 98.9637% ( 3) 00:11:11.228 24665.367 - 24784.524: 98.9718% ( 1) 00:11:11.228 24784.524 - 24903.680: 98.9961% ( 3) 00:11:11.228 24903.680 - 25022.836: 99.0204% ( 3) 00:11:11.228 25022.836 - 25141.993: 99.0447% ( 3) 00:11:11.228 25141.993 - 25261.149: 99.0690% ( 3) 00:11:11.228 25261.149 - 25380.305: 99.0933% ( 3) 00:11:11.228 25380.305 - 25499.462: 99.1176% ( 3) 00:11:11.228 25499.462 - 25618.618: 99.1418% ( 3) 00:11:11.228 25618.618 - 25737.775: 99.1661% ( 3) 00:11:11.228 25737.775 - 25856.931: 99.1904% ( 3) 00:11:11.228 25856.931 - 25976.087: 99.2147% ( 3) 00:11:11.228 25976.087 - 26095.244: 99.2390% ( 3) 00:11:11.228 26095.244 - 26214.400: 99.2633% ( 3) 00:11:11.228 26214.400 - 26333.556: 99.2876% ( 3) 00:11:11.228 26333.556 - 26452.713: 99.3119% ( 3) 00:11:11.228 26452.713 - 26571.869: 99.3442% ( 4) 00:11:11.228 26571.869 - 26691.025: 99.3604% ( 2) 00:11:11.228 26691.025 - 26810.182: 99.3928% ( 4) 00:11:11.228 26810.182 - 26929.338: 99.4171% ( 3) 00:11:11.228 26929.338 - 27048.495: 99.4414% ( 3) 00:11:11.228 27048.495 - 27167.651: 99.4657% ( 3) 00:11:11.228 27167.651 - 27286.807: 99.4819% ( 2) 00:11:11.228 31933.905 - 32172.218: 99.4981% ( 2) 00:11:11.228 32172.218 - 32410.531: 99.5466% ( 6) 00:11:11.228 32410.531 - 32648.844: 99.5952% ( 6) 00:11:11.228 32648.844 - 32887.156: 99.6357% ( 5) 00:11:11.228 
32887.156 - 33125.469: 99.6843% ( 6) 00:11:11.228 33125.469 - 33363.782: 99.7328% ( 6) 00:11:11.228 33363.782 - 33602.095: 99.7814% ( 6) 00:11:11.228 33602.095 - 33840.407: 99.8300% ( 6) 00:11:11.228 33840.407 - 34078.720: 99.8786% ( 6) 00:11:11.228 34078.720 - 34317.033: 99.9352% ( 7) 00:11:11.228 34317.033 - 34555.345: 99.9838% ( 6) 00:11:11.228 34555.345 - 34793.658: 100.0000% ( 2) 00:11:11.228 00:11:11.228 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:11.228 ============================================================================== 00:11:11.228 Range in us Cumulative IO count 00:11:11.228 8340.945 - 8400.524: 0.0729% ( 9) 00:11:11.228 8400.524 - 8460.102: 0.2753% ( 25) 00:11:11.228 8460.102 - 8519.680: 0.5586% ( 35) 00:11:11.228 8519.680 - 8579.258: 0.9634% ( 50) 00:11:11.228 8579.258 - 8638.836: 1.5382% ( 71) 00:11:11.228 8638.836 - 8698.415: 2.2992% ( 94) 00:11:11.228 8698.415 - 8757.993: 3.4326% ( 140) 00:11:11.228 8757.993 - 8817.571: 5.0275% ( 197) 00:11:11.228 8817.571 - 8877.149: 6.7843% ( 217) 00:11:11.228 8877.149 - 8936.727: 8.9945% ( 273) 00:11:11.228 8936.727 - 8996.305: 11.2856% ( 283) 00:11:11.228 8996.305 - 9055.884: 13.9573% ( 330) 00:11:11.228 9055.884 - 9115.462: 16.8232% ( 354) 00:11:11.228 9115.462 - 9175.040: 19.6810% ( 353) 00:11:11.228 9175.040 - 9234.618: 22.6927% ( 372) 00:11:11.228 9234.618 - 9294.196: 25.7934% ( 383) 00:11:11.228 9294.196 - 9353.775: 28.8455% ( 377) 00:11:11.228 9353.775 - 9413.353: 31.8977% ( 377) 00:11:11.228 9413.353 - 9472.931: 34.9174% ( 373) 00:11:11.228 9472.931 - 9532.509: 37.9938% ( 380) 00:11:11.228 9532.509 - 9592.087: 40.9569% ( 366) 00:11:11.228 9592.087 - 9651.665: 44.0253% ( 379) 00:11:11.228 9651.665 - 9711.244: 46.9722% ( 364) 00:11:11.228 9711.244 - 9770.822: 49.8786% ( 359) 00:11:11.228 9770.822 - 9830.400: 52.5907% ( 335) 00:11:11.228 9830.400 - 9889.978: 55.0194% ( 300) 00:11:11.228 9889.978 - 9949.556: 57.1729% ( 266) 00:11:11.228 9949.556 - 10009.135: 59.0593% ( 233) 00:11:11.228 10009.135 - 10068.713: 60.6946% ( 202) 00:11:11.228 10068.713 - 10128.291: 62.0142% ( 163) 00:11:11.228 10128.291 - 10187.869: 63.3258% ( 162) 00:11:11.228 10187.869 - 10247.447: 64.4511% ( 139) 00:11:11.228 10247.447 - 10307.025: 65.5683% ( 138) 00:11:11.228 10307.025 - 10366.604: 66.6856% ( 138) 00:11:11.228 10366.604 - 10426.182: 67.8756% ( 147) 00:11:11.228 10426.182 - 10485.760: 68.9605% ( 134) 00:11:11.228 10485.760 - 10545.338: 70.0453% ( 134) 00:11:11.228 10545.338 - 10604.916: 71.1221% ( 133) 00:11:11.228 10604.916 - 10664.495: 72.1745% ( 130) 00:11:11.228 10664.495 - 10724.073: 73.2837% ( 137) 00:11:11.228 10724.073 - 10783.651: 74.4495% ( 144) 00:11:11.228 10783.651 - 10843.229: 75.6072% ( 143) 00:11:11.228 10843.229 - 10902.807: 76.7730% ( 144) 00:11:11.228 10902.807 - 10962.385: 77.9712% ( 148) 00:11:11.228 10962.385 - 11021.964: 79.1289% ( 143) 00:11:11.228 11021.964 - 11081.542: 80.2137% ( 134) 00:11:11.228 11081.542 - 11141.120: 81.2338% ( 126) 00:11:11.228 11141.120 - 11200.698: 82.1244% ( 110) 00:11:11.228 11200.698 - 11260.276: 82.9501% ( 102) 00:11:11.228 11260.276 - 11319.855: 83.6788% ( 90) 00:11:11.228 11319.855 - 11379.433: 84.3507% ( 83) 00:11:11.228 11379.433 - 11439.011: 85.0065% ( 81) 00:11:11.228 11439.011 - 11498.589: 85.6218% ( 76) 00:11:11.228 11498.589 - 11558.167: 86.1723% ( 68) 00:11:11.228 11558.167 - 11617.745: 86.7309% ( 69) 00:11:11.228 11617.745 - 11677.324: 87.2814% ( 68) 00:11:11.228 11677.324 - 11736.902: 87.8400% ( 69) 00:11:11.228 11736.902 - 11796.480: 88.3339% ( 61) 
00:11:11.228 11796.480 - 11856.058: 88.7791% ( 55) 00:11:11.228 11856.058 - 11915.636: 89.2244% ( 55) 00:11:11.228 11915.636 - 11975.215: 89.6211% ( 49) 00:11:11.228 11975.215 - 12034.793: 89.9935% ( 46) 00:11:11.228 12034.793 - 12094.371: 90.2850% ( 36) 00:11:11.228 12094.371 - 12153.949: 90.5198% ( 29) 00:11:11.228 12153.949 - 12213.527: 90.7545% ( 29) 00:11:11.228 12213.527 - 12273.105: 90.9488% ( 24) 00:11:11.228 12273.105 - 12332.684: 91.1755% ( 28) 00:11:11.228 12332.684 - 12392.262: 91.4589% ( 35) 00:11:11.228 12392.262 - 12451.840: 91.7098% ( 31) 00:11:11.228 12451.840 - 12511.418: 91.9689% ( 32) 00:11:11.228 12511.418 - 12570.996: 92.2280% ( 32) 00:11:11.228 12570.996 - 12630.575: 92.5032% ( 34) 00:11:11.228 12630.575 - 12690.153: 92.7542% ( 31) 00:11:11.228 12690.153 - 12749.731: 93.0052% ( 31) 00:11:11.228 12749.731 - 12809.309: 93.2804% ( 34) 00:11:11.228 12809.309 - 12868.887: 93.5152% ( 29) 00:11:11.228 12868.887 - 12928.465: 93.7419% ( 28) 00:11:11.228 12928.465 - 12988.044: 93.9929% ( 31) 00:11:11.228 12988.044 - 13047.622: 94.2358% ( 30) 00:11:11.228 13047.622 - 13107.200: 94.4705% ( 29) 00:11:11.228 13107.200 - 13166.778: 94.7134% ( 30) 00:11:11.228 13166.778 - 13226.356: 94.9320% ( 27) 00:11:11.228 13226.356 - 13285.935: 95.1668% ( 29) 00:11:11.229 13285.935 - 13345.513: 95.4339% ( 33) 00:11:11.229 13345.513 - 13405.091: 95.6768% ( 30) 00:11:11.229 13405.091 - 13464.669: 95.8630% ( 23) 00:11:11.229 13464.669 - 13524.247: 96.0249% ( 20) 00:11:11.229 13524.247 - 13583.825: 96.1788% ( 19) 00:11:11.229 13583.825 - 13643.404: 96.3407% ( 20) 00:11:11.229 13643.404 - 13702.982: 96.5026% ( 20) 00:11:11.229 13702.982 - 13762.560: 96.6888% ( 23) 00:11:11.229 13762.560 - 13822.138: 96.8831% ( 24) 00:11:11.229 13822.138 - 13881.716: 97.0936% ( 26) 00:11:11.229 13881.716 - 13941.295: 97.3122% ( 27) 00:11:11.229 13941.295 - 14000.873: 97.5146% ( 25) 00:11:11.229 14000.873 - 14060.451: 97.7089% ( 24) 00:11:11.229 14060.451 - 14120.029: 97.9194% ( 26) 00:11:11.229 14120.029 - 14179.607: 98.0651% ( 18) 00:11:11.229 14179.607 - 14239.185: 98.1865% ( 15) 00:11:11.229 14239.185 - 14298.764: 98.2999% ( 14) 00:11:11.229 14298.764 - 14358.342: 98.4132% ( 14) 00:11:11.229 14358.342 - 14417.920: 98.4861% ( 9) 00:11:11.229 14417.920 - 14477.498: 98.5589% ( 9) 00:11:11.229 14477.498 - 14537.076: 98.5994% ( 5) 00:11:11.229 14537.076 - 14596.655: 98.6399% ( 5) 00:11:11.229 14596.655 - 14656.233: 98.6885% ( 6) 00:11:11.229 14656.233 - 14715.811: 98.7451% ( 7) 00:11:11.229 14715.811 - 14775.389: 98.7937% ( 6) 00:11:11.229 14775.389 - 14834.967: 98.8423% ( 6) 00:11:11.229 14834.967 - 14894.545: 98.8909% ( 6) 00:11:11.229 14894.545 - 14954.124: 98.9394% ( 6) 00:11:11.229 14954.124 - 15013.702: 98.9637% ( 3) 00:11:11.229 22163.084 - 22282.240: 98.9880% ( 3) 00:11:11.229 22282.240 - 22401.396: 99.0204% ( 4) 00:11:11.229 22401.396 - 22520.553: 99.0366% ( 2) 00:11:11.229 22520.553 - 22639.709: 99.0609% ( 3) 00:11:11.229 22639.709 - 22758.865: 99.0852% ( 3) 00:11:11.229 22758.865 - 22878.022: 99.1095% ( 3) 00:11:11.229 22878.022 - 22997.178: 99.1418% ( 4) 00:11:11.229 22997.178 - 23116.335: 99.1661% ( 3) 00:11:11.229 23116.335 - 23235.491: 99.1904% ( 3) 00:11:11.229 23235.491 - 23354.647: 99.2147% ( 3) 00:11:11.229 23354.647 - 23473.804: 99.2309% ( 2) 00:11:11.229 23473.804 - 23592.960: 99.2552% ( 3) 00:11:11.229 23592.960 - 23712.116: 99.2876% ( 4) 00:11:11.229 23712.116 - 23831.273: 99.3119% ( 3) 00:11:11.229 23831.273 - 23950.429: 99.3361% ( 3) 00:11:11.229 23950.429 - 24069.585: 99.3604% ( 3) 
00:11:11.229 24069.585 - 24188.742: 99.3847% ( 3) 00:11:11.229 24188.742 - 24307.898: 99.4090% ( 3) 00:11:11.229 24307.898 - 24427.055: 99.4414% ( 4) 00:11:11.229 24427.055 - 24546.211: 99.4657% ( 3) 00:11:11.229 24546.211 - 24665.367: 99.4819% ( 2) 00:11:11.229 29431.622 - 29550.778: 99.4981% ( 2) 00:11:11.229 29550.778 - 29669.935: 99.5142% ( 2) 00:11:11.229 29669.935 - 29789.091: 99.5466% ( 4) 00:11:11.229 29789.091 - 29908.247: 99.5709% ( 3) 00:11:11.229 29908.247 - 30027.404: 99.5952% ( 3) 00:11:11.229 30027.404 - 30146.560: 99.6195% ( 3) 00:11:11.229 30146.560 - 30265.716: 99.6357% ( 2) 00:11:11.229 30265.716 - 30384.873: 99.6600% ( 3) 00:11:11.229 30384.873 - 30504.029: 99.6843% ( 3) 00:11:11.229 30504.029 - 30742.342: 99.7328% ( 6) 00:11:11.229 30742.342 - 30980.655: 99.7895% ( 7) 00:11:11.229 30980.655 - 31218.967: 99.8381% ( 6) 00:11:11.229 31218.967 - 31457.280: 99.8948% ( 7) 00:11:11.229 31457.280 - 31695.593: 99.9433% ( 6) 00:11:11.229 31695.593 - 31933.905: 99.9919% ( 6) 00:11:11.229 31933.905 - 32172.218: 100.0000% ( 1) 00:11:11.229 00:11:11.229 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:11.229 ============================================================================== 00:11:11.229 Range in us Cumulative IO count 00:11:11.229 8340.945 - 8400.524: 0.1052% ( 13) 00:11:11.229 8400.524 - 8460.102: 0.2834% ( 22) 00:11:11.229 8460.102 - 8519.680: 0.5667% ( 35) 00:11:11.229 8519.680 - 8579.258: 0.9148% ( 43) 00:11:11.229 8579.258 - 8638.836: 1.5139% ( 74) 00:11:11.229 8638.836 - 8698.415: 2.2587% ( 92) 00:11:11.229 8698.415 - 8757.993: 3.4488% ( 147) 00:11:11.229 8757.993 - 8817.571: 5.0680% ( 200) 00:11:11.229 8817.571 - 8877.149: 6.9139% ( 228) 00:11:11.229 8877.149 - 8936.727: 9.0026% ( 258) 00:11:11.229 8936.727 - 8996.305: 11.3990% ( 296) 00:11:11.229 8996.305 - 9055.884: 13.9896% ( 320) 00:11:11.229 9055.884 - 9115.462: 16.7017% ( 335) 00:11:11.229 9115.462 - 9175.040: 19.6244% ( 361) 00:11:11.229 9175.040 - 9234.618: 22.5793% ( 365) 00:11:11.229 9234.618 - 9294.196: 25.6396% ( 378) 00:11:11.229 9294.196 - 9353.775: 28.6998% ( 378) 00:11:11.229 9353.775 - 9413.353: 31.8167% ( 385) 00:11:11.229 9413.353 - 9472.931: 34.8041% ( 369) 00:11:11.229 9472.931 - 9532.509: 37.8562% ( 377) 00:11:11.229 9532.509 - 9592.087: 40.9974% ( 388) 00:11:11.229 9592.087 - 9651.665: 43.9848% ( 369) 00:11:11.229 9651.665 - 9711.244: 47.0531% ( 379) 00:11:11.229 9711.244 - 9770.822: 49.8624% ( 347) 00:11:11.229 9770.822 - 9830.400: 52.5745% ( 335) 00:11:11.229 9830.400 - 9889.978: 55.0761% ( 309) 00:11:11.229 9889.978 - 9949.556: 57.2296% ( 266) 00:11:11.229 9949.556 - 10009.135: 59.1564% ( 238) 00:11:11.229 10009.135 - 10068.713: 60.8161% ( 205) 00:11:11.229 10068.713 - 10128.291: 62.2085% ( 172) 00:11:11.229 10128.291 - 10187.869: 63.5525% ( 166) 00:11:11.229 10187.869 - 10247.447: 64.8235% ( 157) 00:11:11.229 10247.447 - 10307.025: 65.9084% ( 134) 00:11:11.229 10307.025 - 10366.604: 67.1389% ( 152) 00:11:11.229 10366.604 - 10426.182: 68.2400% ( 136) 00:11:11.229 10426.182 - 10485.760: 69.3491% ( 137) 00:11:11.229 10485.760 - 10545.338: 70.4744% ( 139) 00:11:11.229 10545.338 - 10604.916: 71.5674% ( 135) 00:11:11.229 10604.916 - 10664.495: 72.6846% ( 138) 00:11:11.229 10664.495 - 10724.073: 73.7694% ( 134) 00:11:11.229 10724.073 - 10783.651: 74.8786% ( 137) 00:11:11.229 10783.651 - 10843.229: 76.0201% ( 141) 00:11:11.229 10843.229 - 10902.807: 77.1373% ( 138) 00:11:11.229 10902.807 - 10962.385: 78.3517% ( 150) 00:11:11.229 10962.385 - 11021.964: 79.5661% ( 150) 
00:11:11.229 11021.964 - 11081.542: 80.6104% ( 129) 00:11:11.229 11081.542 - 11141.120: 81.5334% ( 114) 00:11:11.229 11141.120 - 11200.698: 82.2620% ( 90) 00:11:11.229 11200.698 - 11260.276: 83.0068% ( 92) 00:11:11.229 11260.276 - 11319.855: 83.7111% ( 87) 00:11:11.229 11319.855 - 11379.433: 84.3021% ( 73) 00:11:11.229 11379.433 - 11439.011: 84.9093% ( 75) 00:11:11.229 11439.011 - 11498.589: 85.5084% ( 74) 00:11:11.229 11498.589 - 11558.167: 86.1318% ( 77) 00:11:11.229 11558.167 - 11617.745: 86.7066% ( 71) 00:11:11.229 11617.745 - 11677.324: 87.2652% ( 69) 00:11:11.229 11677.324 - 11736.902: 87.8319% ( 70) 00:11:11.229 11736.902 - 11796.480: 88.3096% ( 59) 00:11:11.229 11796.480 - 11856.058: 88.7225% ( 51) 00:11:11.229 11856.058 - 11915.636: 89.1111% ( 48) 00:11:11.229 11915.636 - 11975.215: 89.5321% ( 52) 00:11:11.229 11975.215 - 12034.793: 89.8802% ( 43) 00:11:11.229 12034.793 - 12094.371: 90.1797% ( 37) 00:11:11.229 12094.371 - 12153.949: 90.4307% ( 31) 00:11:11.229 12153.949 - 12213.527: 90.6412% ( 26) 00:11:11.229 12213.527 - 12273.105: 90.8841% ( 30) 00:11:11.229 12273.105 - 12332.684: 91.1350% ( 31) 00:11:11.229 12332.684 - 12392.262: 91.3779% ( 30) 00:11:11.229 12392.262 - 12451.840: 91.6208% ( 30) 00:11:11.229 12451.840 - 12511.418: 91.8880% ( 33) 00:11:11.229 12511.418 - 12570.996: 92.1551% ( 33) 00:11:11.229 12570.996 - 12630.575: 92.4304% ( 34) 00:11:11.229 12630.575 - 12690.153: 92.6733% ( 30) 00:11:11.229 12690.153 - 12749.731: 92.9485% ( 34) 00:11:11.229 12749.731 - 12809.309: 93.2238% ( 34) 00:11:11.229 12809.309 - 12868.887: 93.4747% ( 31) 00:11:11.229 12868.887 - 12928.465: 93.7014% ( 28) 00:11:11.229 12928.465 - 12988.044: 93.9200% ( 27) 00:11:11.229 12988.044 - 13047.622: 94.1386% ( 27) 00:11:11.229 13047.622 - 13107.200: 94.3248% ( 23) 00:11:11.229 13107.200 - 13166.778: 94.5434% ( 27) 00:11:11.229 13166.778 - 13226.356: 94.7539% ( 26) 00:11:11.229 13226.356 - 13285.935: 94.9806% ( 28) 00:11:11.229 13285.935 - 13345.513: 95.1587% ( 22) 00:11:11.229 13345.513 - 13405.091: 95.3611% ( 25) 00:11:11.229 13405.091 - 13464.669: 95.5716% ( 26) 00:11:11.229 13464.669 - 13524.247: 95.7497% ( 22) 00:11:11.230 13524.247 - 13583.825: 95.9197% ( 21) 00:11:11.230 13583.825 - 13643.404: 96.1059% ( 23) 00:11:11.230 13643.404 - 13702.982: 96.2759% ( 21) 00:11:11.230 13702.982 - 13762.560: 96.4621% ( 23) 00:11:11.230 13762.560 - 13822.138: 96.6483% ( 23) 00:11:11.230 13822.138 - 13881.716: 96.8102% ( 20) 00:11:11.230 13881.716 - 13941.295: 96.9964% ( 23) 00:11:11.230 13941.295 - 14000.873: 97.1907% ( 24) 00:11:11.230 14000.873 - 14060.451: 97.3608% ( 21) 00:11:11.230 14060.451 - 14120.029: 97.5793% ( 27) 00:11:11.230 14120.029 - 14179.607: 97.7170% ( 17) 00:11:11.230 14179.607 - 14239.185: 97.8951% ( 22) 00:11:11.230 14239.185 - 14298.764: 98.0570% ( 20) 00:11:11.230 14298.764 - 14358.342: 98.2432% ( 23) 00:11:11.230 14358.342 - 14417.920: 98.3484% ( 13) 00:11:11.230 14417.920 - 14477.498: 98.4618% ( 14) 00:11:11.230 14477.498 - 14537.076: 98.5185% ( 7) 00:11:11.230 14537.076 - 14596.655: 98.5670% ( 6) 00:11:11.230 14596.655 - 14656.233: 98.6156% ( 6) 00:11:11.230 14656.233 - 14715.811: 98.6723% ( 7) 00:11:11.230 14715.811 - 14775.389: 98.7290% ( 7) 00:11:11.230 14775.389 - 14834.967: 98.7856% ( 7) 00:11:11.230 14834.967 - 14894.545: 98.8423% ( 7) 00:11:11.230 14894.545 - 14954.124: 98.8909% ( 6) 00:11:11.230 14954.124 - 15013.702: 98.9475% ( 7) 00:11:11.230 15013.702 - 15073.280: 98.9637% ( 2) 00:11:11.230 19541.644 - 19660.800: 98.9961% ( 4) 00:11:11.230 19660.800 - 19779.956: 99.0204% 
( 3) 00:11:11.230 19779.956 - 19899.113: 99.0447% ( 3) 00:11:11.230 19899.113 - 20018.269: 99.0609% ( 2) 00:11:11.230 20018.269 - 20137.425: 99.0852% ( 3) 00:11:11.230 20137.425 - 20256.582: 99.1176% ( 4) 00:11:11.230 20256.582 - 20375.738: 99.1418% ( 3) 00:11:11.230 20375.738 - 20494.895: 99.1580% ( 2) 00:11:11.230 20494.895 - 20614.051: 99.1904% ( 4) 00:11:11.230 20614.051 - 20733.207: 99.2147% ( 3) 00:11:11.230 20733.207 - 20852.364: 99.2390% ( 3) 00:11:11.230 20852.364 - 20971.520: 99.2633% ( 3) 00:11:11.230 20971.520 - 21090.676: 99.2876% ( 3) 00:11:11.230 21090.676 - 21209.833: 99.3119% ( 3) 00:11:11.230 21209.833 - 21328.989: 99.3361% ( 3) 00:11:11.230 21328.989 - 21448.145: 99.3604% ( 3) 00:11:11.230 21448.145 - 21567.302: 99.3928% ( 4) 00:11:11.230 21567.302 - 21686.458: 99.4171% ( 3) 00:11:11.230 21686.458 - 21805.615: 99.4414% ( 3) 00:11:11.230 21805.615 - 21924.771: 99.4657% ( 3) 00:11:11.230 21924.771 - 22043.927: 99.4819% ( 2) 00:11:11.230 26929.338 - 27048.495: 99.5062% ( 3) 00:11:11.230 27048.495 - 27167.651: 99.5304% ( 3) 00:11:11.230 27167.651 - 27286.807: 99.5628% ( 4) 00:11:11.230 27286.807 - 27405.964: 99.5871% ( 3) 00:11:11.230 27405.964 - 27525.120: 99.6114% ( 3) 00:11:11.230 27525.120 - 27644.276: 99.6276% ( 2) 00:11:11.230 27644.276 - 27763.433: 99.6600% ( 4) 00:11:11.230 27763.433 - 27882.589: 99.6843% ( 3) 00:11:11.230 27882.589 - 28001.745: 99.7085% ( 3) 00:11:11.230 28001.745 - 28120.902: 99.7409% ( 4) 00:11:11.230 28120.902 - 28240.058: 99.7652% ( 3) 00:11:11.230 28240.058 - 28359.215: 99.7895% ( 3) 00:11:11.230 28359.215 - 28478.371: 99.8219% ( 4) 00:11:11.230 28478.371 - 28597.527: 99.8462% ( 3) 00:11:11.230 28597.527 - 28716.684: 99.8786% ( 4) 00:11:11.230 28716.684 - 28835.840: 99.9028% ( 3) 00:11:11.230 28835.840 - 28954.996: 99.9271% ( 3) 00:11:11.230 28954.996 - 29074.153: 99.9595% ( 4) 00:11:11.230 29074.153 - 29193.309: 99.9757% ( 2) 00:11:11.230 29193.309 - 29312.465: 100.0000% ( 3) 00:11:11.230 00:11:11.230 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:11.230 ============================================================================== 00:11:11.230 Range in us Cumulative IO count 00:11:11.230 8340.945 - 8400.524: 0.0729% ( 9) 00:11:11.230 8400.524 - 8460.102: 0.2510% ( 22) 00:11:11.230 8460.102 - 8519.680: 0.5586% ( 38) 00:11:11.230 8519.680 - 8579.258: 0.8824% ( 40) 00:11:11.230 8579.258 - 8638.836: 1.4815% ( 74) 00:11:11.230 8638.836 - 8698.415: 2.2749% ( 98) 00:11:11.230 8698.415 - 8757.993: 3.4407% ( 144) 00:11:11.230 8757.993 - 8817.571: 4.9466% ( 186) 00:11:11.230 8817.571 - 8877.149: 6.6224% ( 207) 00:11:11.230 8877.149 - 8936.727: 8.6545% ( 251) 00:11:11.230 8936.727 - 8996.305: 11.0913% ( 301) 00:11:11.230 8996.305 - 9055.884: 13.8115% ( 336) 00:11:11.230 9055.884 - 9115.462: 16.5803% ( 342) 00:11:11.230 9115.462 - 9175.040: 19.4624% ( 356) 00:11:11.230 9175.040 - 9234.618: 22.4012% ( 363) 00:11:11.230 9234.618 - 9294.196: 25.5991% ( 395) 00:11:11.230 9294.196 - 9353.775: 28.6674% ( 379) 00:11:11.230 9353.775 - 9413.353: 31.7762% ( 384) 00:11:11.230 9413.353 - 9472.931: 34.8608% ( 381) 00:11:11.230 9472.931 - 9532.509: 37.8886% ( 374) 00:11:11.230 9532.509 - 9592.087: 40.9245% ( 375) 00:11:11.230 9592.087 - 9651.665: 44.0172% ( 382) 00:11:11.230 9651.665 - 9711.244: 47.0936% ( 380) 00:11:11.230 9711.244 - 9770.822: 49.9352% ( 351) 00:11:11.230 9770.822 - 9830.400: 52.6069% ( 330) 00:11:11.230 9830.400 - 9889.978: 55.1490% ( 314) 00:11:11.230 9889.978 - 9949.556: 57.3510% ( 272) 00:11:11.230 9949.556 - 10009.135: 
59.3426% ( 246) 00:11:11.230 10009.135 - 10068.713: 61.1075% ( 218) 00:11:11.230 10068.713 - 10128.291: 62.6052% ( 185) 00:11:11.230 10128.291 - 10187.869: 63.9249% ( 163) 00:11:11.230 10187.869 - 10247.447: 65.1150% ( 147) 00:11:11.230 10247.447 - 10307.025: 66.2484% ( 140) 00:11:11.230 10307.025 - 10366.604: 67.3980% ( 142) 00:11:11.230 10366.604 - 10426.182: 68.4505% ( 130) 00:11:11.230 10426.182 - 10485.760: 69.4139% ( 119) 00:11:11.230 10485.760 - 10545.338: 70.4420% ( 127) 00:11:11.230 10545.338 - 10604.916: 71.4459% ( 124) 00:11:11.230 10604.916 - 10664.495: 72.5470% ( 136) 00:11:11.230 10664.495 - 10724.073: 73.6480% ( 136) 00:11:11.230 10724.073 - 10783.651: 74.7166% ( 132) 00:11:11.230 10783.651 - 10843.229: 75.8501% ( 140) 00:11:11.230 10843.229 - 10902.807: 77.0078% ( 143) 00:11:11.230 10902.807 - 10962.385: 78.1169% ( 137) 00:11:11.230 10962.385 - 11021.964: 79.2908% ( 145) 00:11:11.230 11021.964 - 11081.542: 80.3109% ( 126) 00:11:11.230 11081.542 - 11141.120: 81.2986% ( 122) 00:11:11.230 11141.120 - 11200.698: 82.1648% ( 107) 00:11:11.230 11200.698 - 11260.276: 82.9258% ( 94) 00:11:11.230 11260.276 - 11319.855: 83.6221% ( 86) 00:11:11.230 11319.855 - 11379.433: 84.2617% ( 79) 00:11:11.230 11379.433 - 11439.011: 84.8527% ( 73) 00:11:11.230 11439.011 - 11498.589: 85.3708% ( 64) 00:11:11.230 11498.589 - 11558.167: 85.9132% ( 67) 00:11:11.230 11558.167 - 11617.745: 86.4556% ( 67) 00:11:11.230 11617.745 - 11677.324: 87.0223% ( 70) 00:11:11.230 11677.324 - 11736.902: 87.5405% ( 64) 00:11:11.230 11736.902 - 11796.480: 88.0019% ( 57) 00:11:11.230 11796.480 - 11856.058: 88.4310% ( 53) 00:11:11.230 11856.058 - 11915.636: 88.8844% ( 56) 00:11:11.230 11915.636 - 11975.215: 89.3054% ( 52) 00:11:11.230 11975.215 - 12034.793: 89.7021% ( 49) 00:11:11.230 12034.793 - 12094.371: 90.0907% ( 48) 00:11:11.230 12094.371 - 12153.949: 90.3821% ( 36) 00:11:11.230 12153.949 - 12213.527: 90.7464% ( 45) 00:11:11.230 12213.527 - 12273.105: 91.0460% ( 37) 00:11:11.230 12273.105 - 12332.684: 91.3374% ( 36) 00:11:11.230 12332.684 - 12392.262: 91.6289% ( 36) 00:11:11.230 12392.262 - 12451.840: 91.8799% ( 31) 00:11:11.230 12451.840 - 12511.418: 92.1713% ( 36) 00:11:11.230 12511.418 - 12570.996: 92.4466% ( 34) 00:11:11.231 12570.996 - 12630.575: 92.7380% ( 36) 00:11:11.231 12630.575 - 12690.153: 93.0052% ( 33) 00:11:11.231 12690.153 - 12749.731: 93.2562% ( 31) 00:11:11.231 12749.731 - 12809.309: 93.5071% ( 31) 00:11:11.231 12809.309 - 12868.887: 93.7500% ( 30) 00:11:11.231 12868.887 - 12928.465: 93.9929% ( 30) 00:11:11.231 12928.465 - 12988.044: 94.2196% ( 28) 00:11:11.231 12988.044 - 13047.622: 94.4301% ( 26) 00:11:11.231 13047.622 - 13107.200: 94.6405% ( 26) 00:11:11.231 13107.200 - 13166.778: 94.8510% ( 26) 00:11:11.231 13166.778 - 13226.356: 95.0615% ( 26) 00:11:11.231 13226.356 - 13285.935: 95.2801% ( 27) 00:11:11.231 13285.935 - 13345.513: 95.4582% ( 22) 00:11:11.231 13345.513 - 13405.091: 95.6040% ( 18) 00:11:11.231 13405.091 - 13464.669: 95.6930% ( 11) 00:11:11.231 13464.669 - 13524.247: 95.8063% ( 14) 00:11:11.231 13524.247 - 13583.825: 95.9359% ( 16) 00:11:11.231 13583.825 - 13643.404: 96.0978% ( 20) 00:11:11.231 13643.404 - 13702.982: 96.2516% ( 19) 00:11:11.231 13702.982 - 13762.560: 96.4297% ( 22) 00:11:11.231 13762.560 - 13822.138: 96.5997% ( 21) 00:11:11.231 13822.138 - 13881.716: 96.7617% ( 20) 00:11:11.231 13881.716 - 13941.295: 96.8912% ( 16) 00:11:11.231 13941.295 - 14000.873: 97.0612% ( 21) 00:11:11.231 14000.873 - 14060.451: 97.2312% ( 21) 00:11:11.231 14060.451 - 14120.029: 97.4012% ( 21) 
00:11:11.231 14120.029 - 14179.607: 97.5389% ( 17) 00:11:11.231 14179.607 - 14239.185: 97.6684% ( 16) 00:11:11.231 14239.185 - 14298.764: 97.8222% ( 19) 00:11:11.231 14298.764 - 14358.342: 97.9598% ( 17) 00:11:11.231 14358.342 - 14417.920: 98.0732% ( 14) 00:11:11.231 14417.920 - 14477.498: 98.1946% ( 15) 00:11:11.231 14477.498 - 14537.076: 98.3161% ( 15) 00:11:11.231 14537.076 - 14596.655: 98.4456% ( 16) 00:11:11.231 14596.655 - 14656.233: 98.5670% ( 15) 00:11:11.231 14656.233 - 14715.811: 98.6723% ( 13) 00:11:11.231 14715.811 - 14775.389: 98.7290% ( 7) 00:11:11.231 14775.389 - 14834.967: 98.7694% ( 5) 00:11:11.231 14834.967 - 14894.545: 98.8180% ( 6) 00:11:11.231 14894.545 - 14954.124: 98.8666% ( 6) 00:11:11.231 14954.124 - 15013.702: 98.9152% ( 6) 00:11:11.231 15013.702 - 15073.280: 98.9475% ( 4) 00:11:11.231 15073.280 - 15132.858: 98.9637% ( 2) 00:11:11.231 16920.204 - 17039.360: 98.9880% ( 3) 00:11:11.231 17039.360 - 17158.516: 99.0204% ( 4) 00:11:11.231 17158.516 - 17277.673: 99.0447% ( 3) 00:11:11.231 17277.673 - 17396.829: 99.0690% ( 3) 00:11:11.231 17396.829 - 17515.985: 99.0933% ( 3) 00:11:11.231 17515.985 - 17635.142: 99.1176% ( 3) 00:11:11.231 17635.142 - 17754.298: 99.1418% ( 3) 00:11:11.231 17754.298 - 17873.455: 99.1580% ( 2) 00:11:11.231 17873.455 - 17992.611: 99.1823% ( 3) 00:11:11.231 17992.611 - 18111.767: 99.2147% ( 4) 00:11:11.231 18111.767 - 18230.924: 99.2390% ( 3) 00:11:11.231 18230.924 - 18350.080: 99.2552% ( 2) 00:11:11.231 18350.080 - 18469.236: 99.2876% ( 4) 00:11:11.231 18469.236 - 18588.393: 99.3119% ( 3) 00:11:11.231 18588.393 - 18707.549: 99.3361% ( 3) 00:11:11.231 18707.549 - 18826.705: 99.3604% ( 3) 00:11:11.231 18826.705 - 18945.862: 99.3847% ( 3) 00:11:11.231 18945.862 - 19065.018: 99.4090% ( 3) 00:11:11.231 19065.018 - 19184.175: 99.4414% ( 4) 00:11:11.231 19184.175 - 19303.331: 99.4576% ( 2) 00:11:11.231 19303.331 - 19422.487: 99.4819% ( 3) 00:11:11.231 24307.898 - 24427.055: 99.5142% ( 4) 00:11:11.231 24427.055 - 24546.211: 99.5385% ( 3) 00:11:11.231 24546.211 - 24665.367: 99.5628% ( 3) 00:11:11.231 24665.367 - 24784.524: 99.5952% ( 4) 00:11:11.231 24784.524 - 24903.680: 99.6195% ( 3) 00:11:11.231 24903.680 - 25022.836: 99.6438% ( 3) 00:11:11.231 25022.836 - 25141.993: 99.6762% ( 4) 00:11:11.231 25141.993 - 25261.149: 99.7005% ( 3) 00:11:11.231 25261.149 - 25380.305: 99.7247% ( 3) 00:11:11.231 25380.305 - 25499.462: 99.7490% ( 3) 00:11:11.231 25499.462 - 25618.618: 99.7733% ( 3) 00:11:11.231 25618.618 - 25737.775: 99.7976% ( 3) 00:11:11.231 25737.775 - 25856.931: 99.8219% ( 3) 00:11:11.231 25856.931 - 25976.087: 99.8543% ( 4) 00:11:11.231 25976.087 - 26095.244: 99.8786% ( 3) 00:11:11.231 26095.244 - 26214.400: 99.9109% ( 4) 00:11:11.231 26214.400 - 26333.556: 99.9352% ( 3) 00:11:11.231 26333.556 - 26452.713: 99.9595% ( 3) 00:11:11.231 26452.713 - 26571.869: 99.9919% ( 4) 00:11:11.231 26571.869 - 26691.025: 100.0000% ( 1) 00:11:11.231 00:11:11.231 11:19:33 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:12.610 Initializing NVMe Controllers 00:11:12.610 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:12.610 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:12.610 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:12.610 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:12.610 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:12.610 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:12.610 Associating 
PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:12.610 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:12.610 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:12.610 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:12.610 Initialization complete. Launching workers. 00:11:12.610 ======================================================== 00:11:12.610 Latency(us) 00:11:12.610 Device Information : IOPS MiB/s Average min max 00:11:12.610 PCIE (0000:00:10.0) NSID 1 from core 0: 10265.90 120.30 12503.68 8463.09 40919.33 00:11:12.610 PCIE (0000:00:11.0) NSID 1 from core 0: 10265.90 120.30 12478.52 8587.23 38171.25 00:11:12.610 PCIE (0000:00:13.0) NSID 1 from core 0: 10265.90 120.30 12453.33 8613.31 36282.18 00:11:12.610 PCIE (0000:00:12.0) NSID 1 from core 0: 10265.90 120.30 12429.48 8584.62 33741.28 00:11:12.610 PCIE (0000:00:12.0) NSID 2 from core 0: 10265.90 120.30 12398.86 8613.46 30972.75 00:11:12.610 PCIE (0000:00:12.0) NSID 3 from core 0: 10265.90 120.30 12374.87 8656.19 28179.22 00:11:12.610 ======================================================== 00:11:12.610 Total : 61595.42 721.82 12439.79 8463.09 40919.33 00:11:12.610 00:11:12.610 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:12.610 ================================================================================= 00:11:12.610 1.00000% : 8817.571us 00:11:12.610 10.00000% : 10128.291us 00:11:12.610 25.00000% : 11081.542us 00:11:12.610 50.00000% : 11796.480us 00:11:12.610 75.00000% : 12570.996us 00:11:12.610 90.00000% : 14298.764us 00:11:12.610 95.00000% : 20375.738us 00:11:12.610 98.00000% : 22997.178us 00:11:12.610 99.00000% : 30384.873us 00:11:12.610 99.50000% : 38844.975us 00:11:12.610 99.90000% : 40513.164us 00:11:12.610 99.99000% : 40989.789us 00:11:12.610 99.99900% : 40989.789us 00:11:12.610 99.99990% : 40989.789us 00:11:12.610 99.99999% : 40989.789us 00:11:12.610 00:11:12.610 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:12.610 ================================================================================= 00:11:12.610 1.00000% : 8877.149us 00:11:12.610 10.00000% : 10128.291us 00:11:12.610 25.00000% : 11141.120us 00:11:12.610 50.00000% : 11796.480us 00:11:12.610 75.00000% : 12511.418us 00:11:12.610 90.00000% : 14239.185us 00:11:12.610 95.00000% : 20733.207us 00:11:12.610 98.00000% : 22997.178us 00:11:12.610 99.00000% : 29074.153us 00:11:12.610 99.50000% : 36461.847us 00:11:12.610 99.90000% : 37891.724us 00:11:12.610 99.99000% : 38368.349us 00:11:12.610 99.99900% : 38368.349us 00:11:12.610 99.99990% : 38368.349us 00:11:12.610 99.99999% : 38368.349us 00:11:12.610 00:11:12.610 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:12.610 ================================================================================= 00:11:12.610 1.00000% : 8877.149us 00:11:12.610 10.00000% : 10187.869us 00:11:12.610 25.00000% : 11141.120us 00:11:12.610 50.00000% : 11796.480us 00:11:12.610 75.00000% : 12511.418us 00:11:12.610 90.00000% : 14298.764us 00:11:12.610 95.00000% : 20733.207us 00:11:12.610 98.00000% : 22997.178us 00:11:12.610 99.00000% : 27286.807us 00:11:12.610 99.50000% : 34555.345us 00:11:12.610 99.90000% : 35985.222us 00:11:12.610 99.99000% : 36461.847us 00:11:12.610 99.99900% : 36461.847us 00:11:12.610 99.99990% : 36461.847us 00:11:12.610 99.99999% : 36461.847us 00:11:12.610 00:11:12.610 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:12.610 ================================================================================= 
00:11:12.610 1.00000% : 8936.727us 00:11:12.610 10.00000% : 10187.869us 00:11:12.610 25.00000% : 11200.698us 00:11:12.610 50.00000% : 11796.480us 00:11:12.610 75.00000% : 12511.418us 00:11:12.610 90.00000% : 14358.342us 00:11:12.610 95.00000% : 20733.207us 00:11:12.610 98.00000% : 22997.178us 00:11:12.610 99.00000% : 24665.367us 00:11:12.610 99.50000% : 31933.905us 00:11:12.610 99.90000% : 33363.782us 00:11:12.610 99.99000% : 33840.407us 00:11:12.610 99.99900% : 33840.407us 00:11:12.610 99.99990% : 33840.407us 00:11:12.610 99.99999% : 33840.407us 00:11:12.610 00:11:12.610 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:12.610 ================================================================================= 00:11:12.610 1.00000% : 8936.727us 00:11:12.610 10.00000% : 10187.869us 00:11:12.610 25.00000% : 11141.120us 00:11:12.610 50.00000% : 11796.480us 00:11:12.610 75.00000% : 12511.418us 00:11:12.610 90.00000% : 14537.076us 00:11:12.610 95.00000% : 20733.207us 00:11:12.610 98.00000% : 22520.553us 00:11:12.610 99.00000% : 23235.491us 00:11:12.610 99.50000% : 28835.840us 00:11:12.610 99.90000% : 30504.029us 00:11:12.610 99.99000% : 30980.655us 00:11:12.610 99.99900% : 30980.655us 00:11:12.610 99.99990% : 30980.655us 00:11:12.610 99.99999% : 30980.655us 00:11:12.610 00:11:12.610 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:12.610 ================================================================================= 00:11:12.610 1.00000% : 8877.149us 00:11:12.610 10.00000% : 10187.869us 00:11:12.610 25.00000% : 11141.120us 00:11:12.610 50.00000% : 11796.480us 00:11:12.610 75.00000% : 12511.418us 00:11:12.610 90.00000% : 14417.920us 00:11:12.610 95.00000% : 20256.582us 00:11:12.610 98.00000% : 22282.240us 00:11:12.610 99.00000% : 23235.491us 00:11:12.610 99.50000% : 26452.713us 00:11:12.610 99.90000% : 27882.589us 00:11:12.610 99.99000% : 28240.058us 00:11:12.610 99.99900% : 28240.058us 00:11:12.610 99.99990% : 28240.058us 00:11:12.610 99.99999% : 28240.058us 00:11:12.610 00:11:12.610 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:12.610 ============================================================================== 00:11:12.610 Range in us Cumulative IO count 00:11:12.610 8460.102 - 8519.680: 0.1844% ( 19) 00:11:12.610 8519.680 - 8579.258: 0.3203% ( 14) 00:11:12.610 8579.258 - 8638.836: 0.5823% ( 27) 00:11:12.610 8638.836 - 8698.415: 0.7570% ( 18) 00:11:12.610 8698.415 - 8757.993: 0.9899% ( 24) 00:11:12.610 8757.993 - 8817.571: 1.2616% ( 28) 00:11:12.610 8817.571 - 8877.149: 1.5722% ( 32) 00:11:12.610 8877.149 - 8936.727: 1.8536% ( 29) 00:11:12.610 8936.727 - 8996.305: 2.2030% ( 36) 00:11:12.610 8996.305 - 9055.884: 2.5039% ( 31) 00:11:12.610 9055.884 - 9115.462: 2.7950% ( 30) 00:11:12.610 9115.462 - 9175.040: 3.0765% ( 29) 00:11:12.610 9175.040 - 9234.618: 3.3870% ( 32) 00:11:12.610 9234.618 - 9294.196: 3.6588% ( 28) 00:11:12.610 9294.196 - 9353.775: 4.0761% ( 43) 00:11:12.610 9353.775 - 9413.353: 4.5613% ( 50) 00:11:12.610 9413.353 - 9472.931: 4.9107% ( 36) 00:11:12.610 9472.931 - 9532.509: 5.2407% ( 34) 00:11:12.610 9532.509 - 9592.087: 5.6774% ( 45) 00:11:12.610 9592.087 - 9651.665: 6.1432% ( 48) 00:11:12.610 9651.665 - 9711.244: 6.6382% ( 51) 00:11:12.610 9711.244 - 9770.822: 7.1234% ( 50) 00:11:12.610 9770.822 - 9830.400: 7.6475% ( 54) 00:11:12.610 9830.400 - 9889.978: 8.2686% ( 64) 00:11:12.610 9889.978 - 9949.556: 8.8315% ( 58) 00:11:12.610 9949.556 - 10009.135: 9.3071% ( 49) 00:11:12.610 10009.135 - 10068.713: 9.6661% ( 37) 
00:11:12.610 10068.713 - 10128.291: 10.0155% ( 36) 00:11:12.610 10128.291 - 10187.869: 10.4814% ( 48) 00:11:12.610 10187.869 - 10247.447: 11.0637% ( 60) 00:11:12.610 10247.447 - 10307.025: 11.6557% ( 61) 00:11:12.610 10307.025 - 10366.604: 12.4418% ( 81) 00:11:12.610 10366.604 - 10426.182: 13.3346% ( 92) 00:11:12.610 10426.182 - 10485.760: 14.1984% ( 89) 00:11:12.610 10485.760 - 10545.338: 14.9748% ( 80) 00:11:12.610 10545.338 - 10604.916: 15.7512% ( 80) 00:11:12.611 10604.916 - 10664.495: 16.5761% ( 85) 00:11:12.611 10664.495 - 10724.073: 17.4884% ( 94) 00:11:12.611 10724.073 - 10783.651: 18.6530% ( 120) 00:11:12.611 10783.651 - 10843.229: 19.8661% ( 125) 00:11:12.611 10843.229 - 10902.807: 21.0792% ( 125) 00:11:12.611 10902.807 - 10962.385: 22.4476% ( 141) 00:11:12.611 10962.385 - 11021.964: 23.8742% ( 147) 00:11:12.611 11021.964 - 11081.542: 25.6599% ( 184) 00:11:12.611 11081.542 - 11141.120: 27.4651% ( 186) 00:11:12.611 11141.120 - 11200.698: 29.3284% ( 192) 00:11:12.611 11200.698 - 11260.276: 31.3568% ( 209) 00:11:12.611 11260.276 - 11319.855: 33.4239% ( 213) 00:11:12.611 11319.855 - 11379.433: 35.4037% ( 204) 00:11:12.611 11379.433 - 11439.011: 37.3738% ( 203) 00:11:12.611 11439.011 - 11498.589: 39.5672% ( 226) 00:11:12.611 11498.589 - 11558.167: 41.9255% ( 243) 00:11:12.611 11558.167 - 11617.745: 44.2547% ( 240) 00:11:12.611 11617.745 - 11677.324: 46.4286% ( 224) 00:11:12.611 11677.324 - 11736.902: 48.7189% ( 236) 00:11:12.611 11736.902 - 11796.480: 50.7764% ( 212) 00:11:12.611 11796.480 - 11856.058: 52.9212% ( 221) 00:11:12.611 11856.058 - 11915.636: 55.0563% ( 220) 00:11:12.611 11915.636 - 11975.215: 57.2108% ( 222) 00:11:12.611 11975.215 - 12034.793: 59.1615% ( 201) 00:11:12.611 12034.793 - 12094.371: 61.2578% ( 216) 00:11:12.611 12094.371 - 12153.949: 63.3540% ( 216) 00:11:12.611 12153.949 - 12213.527: 65.2465% ( 195) 00:11:12.611 12213.527 - 12273.105: 67.1293% ( 194) 00:11:12.611 12273.105 - 12332.684: 68.8956% ( 182) 00:11:12.611 12332.684 - 12392.262: 70.6328% ( 179) 00:11:12.611 12392.262 - 12451.840: 72.3894% ( 181) 00:11:12.611 12451.840 - 12511.418: 73.8257% ( 148) 00:11:12.611 12511.418 - 12570.996: 75.0679% ( 128) 00:11:12.611 12570.996 - 12630.575: 76.2811% ( 125) 00:11:12.611 12630.575 - 12690.153: 77.4068% ( 116) 00:11:12.611 12690.153 - 12749.731: 78.4647% ( 109) 00:11:12.611 12749.731 - 12809.309: 79.4643% ( 103) 00:11:12.611 12809.309 - 12868.887: 80.2698% ( 83) 00:11:12.611 12868.887 - 12928.465: 81.0850% ( 84) 00:11:12.611 12928.465 - 12988.044: 81.7547% ( 69) 00:11:12.611 12988.044 - 13047.622: 82.5408% ( 81) 00:11:12.611 13047.622 - 13107.200: 83.0745% ( 55) 00:11:12.611 13107.200 - 13166.778: 83.5792% ( 52) 00:11:12.611 13166.778 - 13226.356: 84.0839% ( 52) 00:11:12.611 13226.356 - 13285.935: 84.5206% ( 45) 00:11:12.611 13285.935 - 13345.513: 85.0349% ( 53) 00:11:12.611 13345.513 - 13405.091: 85.4425% ( 42) 00:11:12.611 13405.091 - 13464.669: 85.8793% ( 45) 00:11:12.611 13464.669 - 13524.247: 86.3160% ( 45) 00:11:12.611 13524.247 - 13583.825: 86.6751% ( 37) 00:11:12.611 13583.825 - 13643.404: 87.1700% ( 51) 00:11:12.611 13643.404 - 13702.982: 87.5679% ( 41) 00:11:12.611 13702.982 - 13762.560: 87.9561% ( 40) 00:11:12.611 13762.560 - 13822.138: 88.3152% ( 37) 00:11:12.611 13822.138 - 13881.716: 88.6355% ( 33) 00:11:12.611 13881.716 - 13941.295: 88.8781% ( 25) 00:11:12.611 13941.295 - 14000.873: 89.1498% ( 28) 00:11:12.611 14000.873 - 14060.451: 89.4701% ( 33) 00:11:12.611 14060.451 - 14120.029: 89.6060% ( 14) 00:11:12.611 14120.029 - 14179.607: 89.7613% ( 16) 
00:11:12.611 14179.607 - 14239.185: 89.9457% ( 19) 00:11:12.611 14239.185 - 14298.764: 90.1009% ( 16) 00:11:12.611 14298.764 - 14358.342: 90.2659% ( 17) 00:11:12.611 14358.342 - 14417.920: 90.4503% ( 19) 00:11:12.611 14417.920 - 14477.498: 90.6153% ( 17) 00:11:12.611 14477.498 - 14537.076: 90.8191% ( 21) 00:11:12.611 14537.076 - 14596.655: 90.9841% ( 17) 00:11:12.611 14596.655 - 14656.233: 91.0908% ( 11) 00:11:12.611 14656.233 - 14715.811: 91.1782% ( 9) 00:11:12.611 14715.811 - 14775.389: 91.3043% ( 13) 00:11:12.611 14775.389 - 14834.967: 91.3723% ( 7) 00:11:12.611 14834.967 - 14894.545: 91.4499% ( 8) 00:11:12.611 14894.545 - 14954.124: 91.5373% ( 9) 00:11:12.611 14954.124 - 15013.702: 91.6149% ( 8) 00:11:12.611 15013.702 - 15073.280: 91.7217% ( 11) 00:11:12.611 15073.280 - 15132.858: 91.7896% ( 7) 00:11:12.611 15132.858 - 15192.436: 91.8866% ( 10) 00:11:12.611 15192.436 - 15252.015: 91.9546% ( 7) 00:11:12.611 15252.015 - 15371.171: 92.0807% ( 13) 00:11:12.611 15371.171 - 15490.327: 92.1875% ( 11) 00:11:12.611 15490.327 - 15609.484: 92.2845% ( 10) 00:11:12.611 15609.484 - 15728.640: 92.4010% ( 12) 00:11:12.611 15728.640 - 15847.796: 92.5078% ( 11) 00:11:12.611 15847.796 - 15966.953: 92.6145% ( 11) 00:11:12.611 15966.953 - 16086.109: 92.7019% ( 9) 00:11:12.611 16086.109 - 16205.265: 92.7407% ( 4) 00:11:12.611 16205.265 - 16324.422: 92.7795% ( 4) 00:11:12.611 16324.422 - 16443.578: 92.8183% ( 4) 00:11:12.611 16443.578 - 16562.735: 92.8571% ( 4) 00:11:12.611 16562.735 - 16681.891: 92.8960% ( 4) 00:11:12.611 16681.891 - 16801.047: 92.9348% ( 4) 00:11:12.611 16801.047 - 16920.204: 92.9833% ( 5) 00:11:12.611 16920.204 - 17039.360: 93.0512% ( 7) 00:11:12.611 17039.360 - 17158.516: 93.1871% ( 14) 00:11:12.611 17158.516 - 17277.673: 93.3133% ( 13) 00:11:12.611 17277.673 - 17396.829: 93.3618% ( 5) 00:11:12.611 17396.829 - 17515.985: 93.4103% ( 5) 00:11:12.611 17515.985 - 17635.142: 93.4589% ( 5) 00:11:12.611 17635.142 - 17754.298: 93.5074% ( 5) 00:11:12.611 17754.298 - 17873.455: 93.5462% ( 4) 00:11:12.611 17873.455 - 17992.611: 93.6044% ( 6) 00:11:12.611 17992.611 - 18111.767: 93.6530% ( 5) 00:11:12.611 18111.767 - 18230.924: 93.6918% ( 4) 00:11:12.611 18230.924 - 18350.080: 93.7791% ( 9) 00:11:12.611 18350.080 - 18469.236: 93.8373% ( 6) 00:11:12.611 18469.236 - 18588.393: 93.8762% ( 4) 00:11:12.611 18588.393 - 18707.549: 93.9053% ( 3) 00:11:12.611 18707.549 - 18826.705: 93.9441% ( 4) 00:11:12.611 18826.705 - 18945.862: 93.9538% ( 1) 00:11:12.611 18945.862 - 19065.018: 93.9829% ( 3) 00:11:12.611 19065.018 - 19184.175: 94.0023% ( 2) 00:11:12.611 19184.175 - 19303.331: 94.0314% ( 3) 00:11:12.611 19303.331 - 19422.487: 94.0509% ( 2) 00:11:12.611 19422.487 - 19541.644: 94.0897% ( 4) 00:11:12.611 19541.644 - 19660.800: 94.1479% ( 6) 00:11:12.611 19660.800 - 19779.956: 94.2255% ( 8) 00:11:12.611 19779.956 - 19899.113: 94.2935% ( 7) 00:11:12.611 19899.113 - 20018.269: 94.3808% ( 9) 00:11:12.611 20018.269 - 20137.425: 94.6914% ( 32) 00:11:12.611 20137.425 - 20256.582: 94.8758% ( 19) 00:11:12.611 20256.582 - 20375.738: 95.0893% ( 22) 00:11:12.611 20375.738 - 20494.895: 95.2446% ( 16) 00:11:12.611 20494.895 - 20614.051: 95.3804% ( 14) 00:11:12.611 20614.051 - 20733.207: 95.5357% ( 16) 00:11:12.611 20733.207 - 20852.364: 95.6619% ( 13) 00:11:12.611 20852.364 - 20971.520: 95.8366% ( 18) 00:11:12.611 20971.520 - 21090.676: 95.9724% ( 14) 00:11:12.611 21090.676 - 21209.833: 96.0792% ( 11) 00:11:12.611 21209.833 - 21328.989: 96.2345% ( 16) 00:11:12.611 21328.989 - 21448.145: 96.3606% ( 13) 00:11:12.611 21448.145 
- 21567.302: 96.5256% ( 17) 00:11:12.611 21567.302 - 21686.458: 96.6227% ( 10) 00:11:12.611 21686.458 - 21805.615: 96.7877% ( 17) 00:11:12.611 21805.615 - 21924.771: 96.9332% ( 15) 00:11:12.611 21924.771 - 22043.927: 97.0594% ( 13) 00:11:12.611 22043.927 - 22163.084: 97.1856% ( 13) 00:11:12.611 22163.084 - 22282.240: 97.3311% ( 15) 00:11:12.611 22282.240 - 22401.396: 97.4379% ( 11) 00:11:12.611 22401.396 - 22520.553: 97.5446% ( 11) 00:11:12.611 22520.553 - 22639.709: 97.6611% ( 12) 00:11:12.611 22639.709 - 22758.865: 97.7873% ( 13) 00:11:12.611 22758.865 - 22878.022: 97.8649% ( 8) 00:11:12.611 22878.022 - 22997.178: 98.0008% ( 14) 00:11:12.611 22997.178 - 23116.335: 98.0881% ( 9) 00:11:12.611 23116.335 - 23235.491: 98.1755% ( 9) 00:11:12.611 23235.491 - 23354.647: 98.3210% ( 15) 00:11:12.611 23354.647 - 23473.804: 98.4181% ( 10) 00:11:12.611 23473.804 - 23592.960: 98.5151% ( 10) 00:11:12.611 23592.960 - 23712.116: 98.6219% ( 11) 00:11:12.611 23712.116 - 23831.273: 98.7092% ( 9) 00:11:12.611 23831.273 - 23950.429: 98.7481% ( 4) 00:11:12.611 23950.429 - 24069.585: 98.7578% ( 1) 00:11:12.611 29193.309 - 29312.465: 98.7675% ( 1) 00:11:12.611 29312.465 - 29431.622: 98.7966% ( 3) 00:11:12.611 29431.622 - 29550.778: 98.8354% ( 4) 00:11:12.611 29550.778 - 29669.935: 98.8548% ( 2) 00:11:12.611 29669.935 - 29789.091: 98.8839% ( 3) 00:11:12.611 29789.091 - 29908.247: 98.9033% ( 2) 00:11:12.611 29908.247 - 30027.404: 98.9227% ( 2) 00:11:12.611 30027.404 - 30146.560: 98.9616% ( 4) 00:11:12.611 30146.560 - 30265.716: 98.9907% ( 3) 00:11:12.611 30265.716 - 30384.873: 99.0198% ( 3) 00:11:12.611 30384.873 - 30504.029: 99.0489% ( 3) 00:11:12.611 30504.029 - 30742.342: 99.0974% ( 5) 00:11:12.611 30742.342 - 30980.655: 99.1460% ( 5) 00:11:12.611 30980.655 - 31218.967: 99.2042% ( 6) 00:11:12.611 31218.967 - 31457.280: 99.2624% ( 6) 00:11:12.611 31457.280 - 31695.593: 99.3207% ( 6) 00:11:12.611 31695.593 - 31933.905: 99.3789% ( 6) 00:11:12.611 38130.036 - 38368.349: 99.3983% ( 2) 00:11:12.611 38368.349 - 38606.662: 99.4468% ( 5) 00:11:12.611 38606.662 - 38844.975: 99.5148% ( 7) 00:11:12.611 38844.975 - 39083.287: 99.5633% ( 5) 00:11:12.611 39083.287 - 39321.600: 99.6215% ( 6) 00:11:12.611 39321.600 - 39559.913: 99.6797% ( 6) 00:11:12.611 39559.913 - 39798.225: 99.7283% ( 5) 00:11:12.611 39798.225 - 40036.538: 99.7865% ( 6) 00:11:12.611 40036.538 - 40274.851: 99.8447% ( 6) 00:11:12.611 40274.851 - 40513.164: 99.9030% ( 6) 00:11:12.611 40513.164 - 40751.476: 99.9709% ( 7) 00:11:12.611 40751.476 - 40989.789: 100.0000% ( 3) 00:11:12.611 00:11:12.611 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:12.611 ============================================================================== 00:11:12.611 Range in us Cumulative IO count 00:11:12.611 8579.258 - 8638.836: 0.0873% ( 9) 00:11:12.611 8638.836 - 8698.415: 0.2717% ( 19) 00:11:12.611 8698.415 - 8757.993: 0.4852% ( 22) 00:11:12.611 8757.993 - 8817.571: 0.7861% ( 31) 00:11:12.611 8817.571 - 8877.149: 1.0481% ( 27) 00:11:12.612 8877.149 - 8936.727: 1.2811% ( 24) 00:11:12.612 8936.727 - 8996.305: 1.5528% ( 28) 00:11:12.612 8996.305 - 9055.884: 1.9410% ( 40) 00:11:12.612 9055.884 - 9115.462: 2.3583% ( 43) 00:11:12.612 9115.462 - 9175.040: 2.8241% ( 48) 00:11:12.612 9175.040 - 9234.618: 3.2415% ( 43) 00:11:12.612 9234.618 - 9294.196: 3.7752% ( 55) 00:11:12.612 9294.196 - 9353.775: 4.1925% ( 43) 00:11:12.612 9353.775 - 9413.353: 4.5419% ( 36) 00:11:12.612 9413.353 - 9472.931: 4.9204% ( 39) 00:11:12.612 9472.931 - 9532.509: 5.3863% ( 48) 00:11:12.612 
9532.509 - 9592.087: 5.8909% ( 52) 00:11:12.612 9592.087 - 9651.665: 6.3859% ( 51) 00:11:12.612 9651.665 - 9711.244: 6.8905% ( 52) 00:11:12.612 9711.244 - 9770.822: 7.4049% ( 53) 00:11:12.612 9770.822 - 9830.400: 7.8707% ( 48) 00:11:12.612 9830.400 - 9889.978: 8.2880% ( 43) 00:11:12.612 9889.978 - 9949.556: 8.6859% ( 41) 00:11:12.612 9949.556 - 10009.135: 9.1712% ( 50) 00:11:12.612 10009.135 - 10068.713: 9.6759% ( 52) 00:11:12.612 10068.713 - 10128.291: 10.1805% ( 52) 00:11:12.612 10128.291 - 10187.869: 10.6464% ( 48) 00:11:12.612 10187.869 - 10247.447: 11.0831% ( 45) 00:11:12.612 10247.447 - 10307.025: 11.4227% ( 35) 00:11:12.612 10307.025 - 10366.604: 11.7915% ( 38) 00:11:12.612 10366.604 - 10426.182: 12.3156% ( 54) 00:11:12.612 10426.182 - 10485.760: 12.9755% ( 68) 00:11:12.612 10485.760 - 10545.338: 13.8490% ( 90) 00:11:12.612 10545.338 - 10604.916: 14.7030% ( 88) 00:11:12.612 10604.916 - 10664.495: 15.6250% ( 95) 00:11:12.612 10664.495 - 10724.073: 16.7314% ( 114) 00:11:12.612 10724.073 - 10783.651: 18.0124% ( 132) 00:11:12.612 10783.651 - 10843.229: 19.1285% ( 115) 00:11:12.612 10843.229 - 10902.807: 20.3610% ( 127) 00:11:12.612 10902.807 - 10962.385: 21.5256% ( 120) 00:11:12.612 10962.385 - 11021.964: 22.8261% ( 134) 00:11:12.612 11021.964 - 11081.542: 24.3012% ( 152) 00:11:12.612 11081.542 - 11141.120: 25.7764% ( 152) 00:11:12.612 11141.120 - 11200.698: 27.7659% ( 205) 00:11:12.612 11200.698 - 11260.276: 29.9592% ( 226) 00:11:12.612 11260.276 - 11319.855: 32.0846% ( 219) 00:11:12.612 11319.855 - 11379.433: 34.1615% ( 214) 00:11:12.612 11379.433 - 11439.011: 36.3936% ( 230) 00:11:12.612 11439.011 - 11498.589: 38.6937% ( 237) 00:11:12.612 11498.589 - 11558.167: 41.0229% ( 240) 00:11:12.612 11558.167 - 11617.745: 43.3327% ( 238) 00:11:12.612 11617.745 - 11677.324: 45.6716% ( 241) 00:11:12.612 11677.324 - 11736.902: 48.1561% ( 256) 00:11:12.612 11736.902 - 11796.480: 50.5532% ( 247) 00:11:12.612 11796.480 - 11856.058: 53.0085% ( 253) 00:11:12.612 11856.058 - 11915.636: 55.4639% ( 253) 00:11:12.612 11915.636 - 11975.215: 57.9290% ( 254) 00:11:12.612 11975.215 - 12034.793: 60.1805% ( 232) 00:11:12.612 12034.793 - 12094.371: 62.4515% ( 234) 00:11:12.612 12094.371 - 12153.949: 64.7321% ( 235) 00:11:12.612 12153.949 - 12213.527: 66.9449% ( 228) 00:11:12.612 12213.527 - 12273.105: 69.0314% ( 215) 00:11:12.612 12273.105 - 12332.684: 70.7589% ( 178) 00:11:12.612 12332.684 - 12392.262: 72.3311% ( 162) 00:11:12.612 12392.262 - 12451.840: 73.7675% ( 148) 00:11:12.612 12451.840 - 12511.418: 75.1456% ( 142) 00:11:12.612 12511.418 - 12570.996: 76.4946% ( 139) 00:11:12.612 12570.996 - 12630.575: 77.9018% ( 145) 00:11:12.612 12630.575 - 12690.153: 79.0470% ( 118) 00:11:12.612 12690.153 - 12749.731: 80.0175% ( 100) 00:11:12.612 12749.731 - 12809.309: 80.9200% ( 93) 00:11:12.612 12809.309 - 12868.887: 81.7352% ( 84) 00:11:12.612 12868.887 - 12928.465: 82.5019% ( 79) 00:11:12.612 12928.465 - 12988.044: 83.1134% ( 63) 00:11:12.612 12988.044 - 13047.622: 83.6471% ( 55) 00:11:12.612 13047.622 - 13107.200: 84.1712% ( 54) 00:11:12.612 13107.200 - 13166.778: 84.7438% ( 59) 00:11:12.612 13166.778 - 13226.356: 85.2290% ( 50) 00:11:12.612 13226.356 - 13285.935: 85.7531% ( 54) 00:11:12.612 13285.935 - 13345.513: 86.1898% ( 45) 00:11:12.612 13345.513 - 13405.091: 86.6168% ( 44) 00:11:12.612 13405.091 - 13464.669: 86.9953% ( 39) 00:11:12.612 13464.669 - 13524.247: 87.3350% ( 35) 00:11:12.612 13524.247 - 13583.825: 87.6747% ( 35) 00:11:12.612 13583.825 - 13643.404: 87.9852% ( 32) 00:11:12.612 13643.404 - 13702.982: 
88.3734% ( 40) 00:11:12.612 13702.982 - 13762.560: 88.6549% ( 29) 00:11:12.612 13762.560 - 13822.138: 88.8878% ( 24) 00:11:12.612 13822.138 - 13881.716: 89.1304% ( 25) 00:11:12.612 13881.716 - 13941.295: 89.2954% ( 17) 00:11:12.612 13941.295 - 14000.873: 89.4798% ( 19) 00:11:12.612 14000.873 - 14060.451: 89.6836% ( 21) 00:11:12.612 14060.451 - 14120.029: 89.8486% ( 17) 00:11:12.612 14120.029 - 14179.607: 89.9554% ( 11) 00:11:12.612 14179.607 - 14239.185: 90.0621% ( 11) 00:11:12.612 14239.185 - 14298.764: 90.1980% ( 14) 00:11:12.612 14298.764 - 14358.342: 90.3630% ( 17) 00:11:12.612 14358.342 - 14417.920: 90.4697% ( 11) 00:11:12.612 14417.920 - 14477.498: 90.5959% ( 13) 00:11:12.612 14477.498 - 14537.076: 90.6832% ( 9) 00:11:12.612 14537.076 - 14596.655: 90.7706% ( 9) 00:11:12.612 14596.655 - 14656.233: 90.8579% ( 9) 00:11:12.612 14656.233 - 14715.811: 90.9356% ( 8) 00:11:12.612 14715.811 - 14775.389: 91.0132% ( 8) 00:11:12.612 14775.389 - 14834.967: 91.0811% ( 7) 00:11:12.612 14834.967 - 14894.545: 91.1394% ( 6) 00:11:12.612 14894.545 - 14954.124: 91.2170% ( 8) 00:11:12.612 14954.124 - 15013.702: 91.3043% ( 9) 00:11:12.612 15013.702 - 15073.280: 91.3723% ( 7) 00:11:12.612 15073.280 - 15132.858: 91.4596% ( 9) 00:11:12.612 15132.858 - 15192.436: 91.5470% ( 9) 00:11:12.612 15192.436 - 15252.015: 91.6246% ( 8) 00:11:12.612 15252.015 - 15371.171: 91.7896% ( 17) 00:11:12.612 15371.171 - 15490.327: 91.9158% ( 13) 00:11:12.612 15490.327 - 15609.484: 92.0225% ( 11) 00:11:12.612 15609.484 - 15728.640: 92.1293% ( 11) 00:11:12.612 15728.640 - 15847.796: 92.1972% ( 7) 00:11:12.612 15847.796 - 15966.953: 92.2748% ( 8) 00:11:12.612 15966.953 - 16086.109: 92.3525% ( 8) 00:11:12.612 16086.109 - 16205.265: 92.4107% ( 6) 00:11:12.612 16205.265 - 16324.422: 92.4884% ( 8) 00:11:12.612 16324.422 - 16443.578: 92.5563% ( 7) 00:11:12.612 16443.578 - 16562.735: 92.6339% ( 8) 00:11:12.612 16562.735 - 16681.891: 92.7116% ( 8) 00:11:12.612 16681.891 - 16801.047: 92.7892% ( 8) 00:11:12.612 16801.047 - 16920.204: 92.8668% ( 8) 00:11:12.612 16920.204 - 17039.360: 92.9445% ( 8) 00:11:12.612 17039.360 - 17158.516: 93.0124% ( 7) 00:11:12.612 17158.516 - 17277.673: 93.1095% ( 10) 00:11:12.612 17277.673 - 17396.829: 93.2356% ( 13) 00:11:12.612 17396.829 - 17515.985: 93.3812% ( 15) 00:11:12.612 17515.985 - 17635.142: 93.4394% ( 6) 00:11:12.612 17635.142 - 17754.298: 93.5947% ( 16) 00:11:12.612 17754.298 - 17873.455: 93.7112% ( 12) 00:11:12.612 17873.455 - 17992.611: 93.7791% ( 7) 00:11:12.612 17992.611 - 18111.767: 93.8665% ( 9) 00:11:12.612 18111.767 - 18230.924: 93.9538% ( 9) 00:11:12.612 18230.924 - 18350.080: 94.0120% ( 6) 00:11:12.612 18350.080 - 18469.236: 94.0994% ( 9) 00:11:12.612 18469.236 - 18588.393: 94.1673% ( 7) 00:11:12.612 18588.393 - 18707.549: 94.1867% ( 2) 00:11:12.612 18707.549 - 18826.705: 94.2158% ( 3) 00:11:12.612 18826.705 - 18945.862: 94.2450% ( 3) 00:11:12.612 18945.862 - 19065.018: 94.2741% ( 3) 00:11:12.612 19065.018 - 19184.175: 94.3032% ( 3) 00:11:12.612 19184.175 - 19303.331: 94.3420% ( 4) 00:11:12.612 19303.331 - 19422.487: 94.3711% ( 3) 00:11:12.612 19422.487 - 19541.644: 94.4099% ( 4) 00:11:12.612 20018.269 - 20137.425: 94.4488% ( 4) 00:11:12.612 20137.425 - 20256.582: 94.4682% ( 2) 00:11:12.612 20256.582 - 20375.738: 94.4973% ( 3) 00:11:12.612 20375.738 - 20494.895: 94.5749% ( 8) 00:11:12.612 20494.895 - 20614.051: 94.8078% ( 24) 00:11:12.612 20614.051 - 20733.207: 95.0019% ( 20) 00:11:12.612 20733.207 - 20852.364: 95.2349% ( 24) 00:11:12.612 20852.364 - 20971.520: 95.4678% ( 24) 00:11:12.612 
20971.520 - 21090.676: 95.6910% ( 23) 00:11:12.612 21090.676 - 21209.833: 95.9142% ( 23) 00:11:12.612 21209.833 - 21328.989: 96.0501% ( 14) 00:11:12.612 21328.989 - 21448.145: 96.1762% ( 13) 00:11:12.612 21448.145 - 21567.302: 96.3218% ( 15) 00:11:12.612 21567.302 - 21686.458: 96.4480% ( 13) 00:11:12.612 21686.458 - 21805.615: 96.6130% ( 17) 00:11:12.612 21805.615 - 21924.771: 96.7585% ( 15) 00:11:12.612 21924.771 - 22043.927: 96.9235% ( 17) 00:11:12.612 22043.927 - 22163.084: 97.0594% ( 14) 00:11:12.612 22163.084 - 22282.240: 97.1953% ( 14) 00:11:12.612 22282.240 - 22401.396: 97.3602% ( 17) 00:11:12.612 22401.396 - 22520.553: 97.5155% ( 16) 00:11:12.612 22520.553 - 22639.709: 97.6611% ( 15) 00:11:12.612 22639.709 - 22758.865: 97.8164% ( 16) 00:11:12.612 22758.865 - 22878.022: 97.9717% ( 16) 00:11:12.612 22878.022 - 22997.178: 98.1269% ( 16) 00:11:12.612 22997.178 - 23116.335: 98.3016% ( 18) 00:11:12.612 23116.335 - 23235.491: 98.4278% ( 13) 00:11:12.612 23235.491 - 23354.647: 98.5540% ( 13) 00:11:12.612 23354.647 - 23473.804: 98.6510% ( 10) 00:11:12.612 23473.804 - 23592.960: 98.7189% ( 7) 00:11:12.612 23592.960 - 23712.116: 98.7578% ( 4) 00:11:12.612 28120.902 - 28240.058: 98.8160% ( 6) 00:11:12.612 28240.058 - 28359.215: 98.8354% ( 2) 00:11:12.612 28359.215 - 28478.371: 98.8548% ( 2) 00:11:12.612 28478.371 - 28597.527: 98.8936% ( 4) 00:11:12.612 28597.527 - 28716.684: 98.9130% ( 2) 00:11:12.612 28716.684 - 28835.840: 98.9519% ( 4) 00:11:12.612 28835.840 - 28954.996: 98.9810% ( 3) 00:11:12.612 28954.996 - 29074.153: 99.0101% ( 3) 00:11:12.612 29074.153 - 29193.309: 99.0392% ( 3) 00:11:12.612 29193.309 - 29312.465: 99.0586% ( 2) 00:11:12.612 29312.465 - 29431.622: 99.0877% ( 3) 00:11:12.612 29431.622 - 29550.778: 99.1168% ( 3) 00:11:12.612 29550.778 - 29669.935: 99.1557% ( 4) 00:11:12.612 29669.935 - 29789.091: 99.1848% ( 3) 00:11:12.612 29789.091 - 29908.247: 99.2139% ( 3) 00:11:12.612 29908.247 - 30027.404: 99.2430% ( 3) 00:11:12.612 30027.404 - 30146.560: 99.2721% ( 3) 00:11:12.612 30146.560 - 30265.716: 99.3012% ( 3) 00:11:12.612 30265.716 - 30384.873: 99.3304% ( 3) 00:11:12.612 30384.873 - 30504.029: 99.3595% ( 3) 00:11:12.612 30504.029 - 30742.342: 99.3789% ( 2) 00:11:12.613 35746.909 - 35985.222: 99.4080% ( 3) 00:11:12.613 35985.222 - 36223.535: 99.4759% ( 7) 00:11:12.613 36223.535 - 36461.847: 99.5439% ( 7) 00:11:12.613 36461.847 - 36700.160: 99.6021% ( 6) 00:11:12.613 36700.160 - 36938.473: 99.6700% ( 7) 00:11:12.613 36938.473 - 37176.785: 99.7186% ( 5) 00:11:12.613 37176.785 - 37415.098: 99.7865% ( 7) 00:11:12.613 37415.098 - 37653.411: 99.8544% ( 7) 00:11:12.613 37653.411 - 37891.724: 99.9224% ( 7) 00:11:12.613 37891.724 - 38130.036: 99.9806% ( 6) 00:11:12.613 38130.036 - 38368.349: 100.0000% ( 2) 00:11:12.613 00:11:12.613 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:12.613 ============================================================================== 00:11:12.613 Range in us Cumulative IO count 00:11:12.613 8579.258 - 8638.836: 0.0388% ( 4) 00:11:12.613 8638.836 - 8698.415: 0.2232% ( 19) 00:11:12.613 8698.415 - 8757.993: 0.5047% ( 29) 00:11:12.613 8757.993 - 8817.571: 0.8637% ( 37) 00:11:12.613 8817.571 - 8877.149: 1.0967% ( 24) 00:11:12.613 8877.149 - 8936.727: 1.3975% ( 31) 00:11:12.613 8936.727 - 8996.305: 1.6984% ( 31) 00:11:12.613 8996.305 - 9055.884: 2.0186% ( 33) 00:11:12.613 9055.884 - 9115.462: 2.3680% ( 36) 00:11:12.613 9115.462 - 9175.040: 2.7174% ( 36) 00:11:12.613 9175.040 - 9234.618: 3.1444% ( 44) 00:11:12.613 9234.618 - 9294.196: 3.6297% ( 
50) 00:11:12.613 9294.196 - 9353.775: 4.0082% ( 39) 00:11:12.613 9353.775 - 9413.353: 4.5031% ( 51) 00:11:12.613 9413.353 - 9472.931: 4.9204% ( 43) 00:11:12.613 9472.931 - 9532.509: 5.4057% ( 50) 00:11:12.613 9532.509 - 9592.087: 5.9006% ( 51) 00:11:12.613 9592.087 - 9651.665: 6.3859% ( 50) 00:11:12.613 9651.665 - 9711.244: 6.8517% ( 48) 00:11:12.613 9711.244 - 9770.822: 7.2884% ( 45) 00:11:12.613 9770.822 - 9830.400: 7.7931% ( 52) 00:11:12.613 9830.400 - 9889.978: 8.1910% ( 41) 00:11:12.613 9889.978 - 9949.556: 8.5986% ( 42) 00:11:12.613 9949.556 - 10009.135: 8.9965% ( 41) 00:11:12.613 10009.135 - 10068.713: 9.4332% ( 45) 00:11:12.613 10068.713 - 10128.291: 9.9961% ( 58) 00:11:12.613 10128.291 - 10187.869: 10.6172% ( 64) 00:11:12.613 10187.869 - 10247.447: 11.2189% ( 62) 00:11:12.613 10247.447 - 10307.025: 11.8207% ( 62) 00:11:12.613 10307.025 - 10366.604: 12.6553% ( 86) 00:11:12.613 10366.604 - 10426.182: 13.4026% ( 77) 00:11:12.613 10426.182 - 10485.760: 14.0722% ( 69) 00:11:12.613 10485.760 - 10545.338: 14.8195% ( 77) 00:11:12.613 10545.338 - 10604.916: 15.5862% ( 79) 00:11:12.613 10604.916 - 10664.495: 16.5082% ( 95) 00:11:12.613 10664.495 - 10724.073: 17.4495% ( 97) 00:11:12.613 10724.073 - 10783.651: 18.6335% ( 122) 00:11:12.613 10783.651 - 10843.229: 19.6429% ( 104) 00:11:12.613 10843.229 - 10902.807: 20.6813% ( 107) 00:11:12.613 10902.807 - 10962.385: 21.5839% ( 93) 00:11:12.613 10962.385 - 11021.964: 22.5738% ( 102) 00:11:12.613 11021.964 - 11081.542: 23.8257% ( 129) 00:11:12.613 11081.542 - 11141.120: 25.3397% ( 156) 00:11:12.613 11141.120 - 11200.698: 27.1642% ( 188) 00:11:12.613 11200.698 - 11260.276: 29.4352% ( 234) 00:11:12.613 11260.276 - 11319.855: 31.5217% ( 215) 00:11:12.613 11319.855 - 11379.433: 33.3075% ( 184) 00:11:12.613 11379.433 - 11439.011: 35.2970% ( 205) 00:11:12.613 11439.011 - 11498.589: 37.7911% ( 257) 00:11:12.613 11498.589 - 11558.167: 40.4988% ( 279) 00:11:12.613 11558.167 - 11617.745: 42.8766% ( 245) 00:11:12.613 11617.745 - 11677.324: 45.1863% ( 238) 00:11:12.613 11677.324 - 11736.902: 47.7096% ( 260) 00:11:12.613 11736.902 - 11796.480: 50.1844% ( 255) 00:11:12.613 11796.480 - 11856.058: 52.5621% ( 245) 00:11:12.613 11856.058 - 11915.636: 55.0272% ( 254) 00:11:12.613 11915.636 - 11975.215: 57.4631% ( 251) 00:11:12.613 11975.215 - 12034.793: 59.8311% ( 244) 00:11:12.613 12034.793 - 12094.371: 62.2089% ( 245) 00:11:12.613 12094.371 - 12153.949: 64.5575% ( 242) 00:11:12.613 12153.949 - 12213.527: 66.7023% ( 221) 00:11:12.613 12213.527 - 12273.105: 68.6821% ( 204) 00:11:12.613 12273.105 - 12332.684: 70.5066% ( 188) 00:11:12.613 12332.684 - 12392.262: 72.2244% ( 177) 00:11:12.613 12392.262 - 12451.840: 73.7869% ( 161) 00:11:12.613 12451.840 - 12511.418: 75.3009% ( 156) 00:11:12.613 12511.418 - 12570.996: 76.5625% ( 130) 00:11:12.613 12570.996 - 12630.575: 77.6106% ( 108) 00:11:12.613 12630.575 - 12690.153: 78.6588% ( 108) 00:11:12.613 12690.153 - 12749.731: 79.7554% ( 113) 00:11:12.613 12749.731 - 12809.309: 80.7356% ( 101) 00:11:12.613 12809.309 - 12868.887: 81.6867% ( 98) 00:11:12.613 12868.887 - 12928.465: 82.4437% ( 78) 00:11:12.613 12928.465 - 12988.044: 83.1134% ( 69) 00:11:12.613 12988.044 - 13047.622: 83.6859% ( 59) 00:11:12.613 13047.622 - 13107.200: 84.2488% ( 58) 00:11:12.613 13107.200 - 13166.778: 84.7632% ( 53) 00:11:12.613 13166.778 - 13226.356: 85.3261% ( 58) 00:11:12.613 13226.356 - 13285.935: 85.8405% ( 53) 00:11:12.613 13285.935 - 13345.513: 86.3451% ( 52) 00:11:12.613 13345.513 - 13405.091: 86.7236% ( 39) 00:11:12.613 13405.091 - 
13464.669: 87.0827% ( 37) 00:11:12.613 13464.669 - 13524.247: 87.4224% ( 35) 00:11:12.613 13524.247 - 13583.825: 87.7329% ( 32) 00:11:12.613 13583.825 - 13643.404: 88.0144% ( 29) 00:11:12.613 13643.404 - 13702.982: 88.3152% ( 31) 00:11:12.613 13702.982 - 13762.560: 88.5675% ( 26) 00:11:12.613 13762.560 - 13822.138: 88.7908% ( 23) 00:11:12.613 13822.138 - 13881.716: 88.9752% ( 19) 00:11:12.613 13881.716 - 13941.295: 89.1595% ( 19) 00:11:12.613 13941.295 - 14000.873: 89.2857% ( 13) 00:11:12.613 14000.873 - 14060.451: 89.4022% ( 12) 00:11:12.613 14060.451 - 14120.029: 89.5769% ( 18) 00:11:12.613 14120.029 - 14179.607: 89.8098% ( 24) 00:11:12.613 14179.607 - 14239.185: 89.9942% ( 19) 00:11:12.613 14239.185 - 14298.764: 90.1689% ( 18) 00:11:12.613 14298.764 - 14358.342: 90.3339% ( 17) 00:11:12.613 14358.342 - 14417.920: 90.4891% ( 16) 00:11:12.613 14417.920 - 14477.498: 90.5959% ( 11) 00:11:12.613 14477.498 - 14537.076: 90.7220% ( 13) 00:11:12.613 14537.076 - 14596.655: 90.8191% ( 10) 00:11:12.613 14596.655 - 14656.233: 90.8870% ( 7) 00:11:12.613 14656.233 - 14715.811: 90.9453% ( 6) 00:11:12.613 14715.811 - 14775.389: 91.0326% ( 9) 00:11:12.613 14775.389 - 14834.967: 91.0908% ( 6) 00:11:12.613 14834.967 - 14894.545: 91.1685% ( 8) 00:11:12.613 14894.545 - 14954.124: 91.2364% ( 7) 00:11:12.613 14954.124 - 15013.702: 91.3238% ( 9) 00:11:12.613 15013.702 - 15073.280: 91.3723% ( 5) 00:11:12.613 15073.280 - 15132.858: 91.4208% ( 5) 00:11:12.613 15132.858 - 15192.436: 91.4790% ( 6) 00:11:12.613 15192.436 - 15252.015: 91.5373% ( 6) 00:11:12.613 15252.015 - 15371.171: 91.6343% ( 10) 00:11:12.613 15371.171 - 15490.327: 91.7605% ( 13) 00:11:12.613 15490.327 - 15609.484: 91.8964% ( 14) 00:11:12.613 15609.484 - 15728.640: 92.0710% ( 18) 00:11:12.613 15728.640 - 15847.796: 92.1972% ( 13) 00:11:12.613 15847.796 - 15966.953: 92.3137% ( 12) 00:11:12.613 15966.953 - 16086.109: 92.4107% ( 10) 00:11:12.613 16086.109 - 16205.265: 92.5078% ( 10) 00:11:12.613 16205.265 - 16324.422: 92.6048% ( 10) 00:11:12.613 16324.422 - 16443.578: 92.6922% ( 9) 00:11:12.613 16443.578 - 16562.735: 92.7698% ( 8) 00:11:12.613 16562.735 - 16681.891: 92.8474% ( 8) 00:11:12.613 16681.891 - 16801.047: 92.9154% ( 7) 00:11:12.613 16801.047 - 16920.204: 92.9445% ( 3) 00:11:12.613 16920.204 - 17039.360: 92.9930% ( 5) 00:11:12.613 17039.360 - 17158.516: 93.0221% ( 3) 00:11:12.613 17158.516 - 17277.673: 93.1192% ( 10) 00:11:12.613 17277.673 - 17396.829: 93.2939% ( 18) 00:11:12.613 17396.829 - 17515.985: 93.5074% ( 22) 00:11:12.613 17515.985 - 17635.142: 93.6627% ( 16) 00:11:12.613 17635.142 - 17754.298: 93.7403% ( 8) 00:11:12.613 17754.298 - 17873.455: 93.8082% ( 7) 00:11:12.613 17873.455 - 17992.611: 93.8762% ( 7) 00:11:12.613 17992.611 - 18111.767: 93.9635% ( 9) 00:11:12.613 18111.767 - 18230.924: 94.0509% ( 9) 00:11:12.613 18230.924 - 18350.080: 94.1382% ( 9) 00:11:12.613 18350.080 - 18469.236: 94.2158% ( 8) 00:11:12.613 18469.236 - 18588.393: 94.2644% ( 5) 00:11:12.613 18588.393 - 18707.549: 94.2935% ( 3) 00:11:12.613 18707.549 - 18826.705: 94.3323% ( 4) 00:11:12.613 18826.705 - 18945.862: 94.3711% ( 4) 00:11:12.613 18945.862 - 19065.018: 94.4002% ( 3) 00:11:12.613 19065.018 - 19184.175: 94.4099% ( 1) 00:11:12.613 19899.113 - 20018.269: 94.4196% ( 1) 00:11:12.613 20018.269 - 20137.425: 94.4391% ( 2) 00:11:12.613 20137.425 - 20256.582: 94.4682% ( 3) 00:11:12.613 20256.582 - 20375.738: 94.5264% ( 6) 00:11:12.613 20375.738 - 20494.895: 94.6332% ( 11) 00:11:12.613 20494.895 - 20614.051: 94.8758% ( 25) 00:11:12.613 20614.051 - 20733.207: 95.1572% 
( 29) 00:11:12.613 20733.207 - 20852.364: 95.4193% ( 27) 00:11:12.613 20852.364 - 20971.520: 95.6328% ( 22) 00:11:12.613 20971.520 - 21090.676: 95.8560% ( 23) 00:11:12.613 21090.676 - 21209.833: 96.0210% ( 17) 00:11:12.613 21209.833 - 21328.989: 96.1859% ( 17) 00:11:12.613 21328.989 - 21448.145: 96.3024% ( 12) 00:11:12.613 21448.145 - 21567.302: 96.4286% ( 13) 00:11:12.613 21567.302 - 21686.458: 96.5741% ( 15) 00:11:12.613 21686.458 - 21805.615: 96.7100% ( 14) 00:11:12.613 21805.615 - 21924.771: 96.8362% ( 13) 00:11:12.613 21924.771 - 22043.927: 96.9818% ( 15) 00:11:12.613 22043.927 - 22163.084: 97.0982% ( 12) 00:11:12.613 22163.084 - 22282.240: 97.2341% ( 14) 00:11:12.613 22282.240 - 22401.396: 97.3894% ( 16) 00:11:12.613 22401.396 - 22520.553: 97.5446% ( 16) 00:11:12.613 22520.553 - 22639.709: 97.6805% ( 14) 00:11:12.613 22639.709 - 22758.865: 97.8164% ( 14) 00:11:12.613 22758.865 - 22878.022: 97.9620% ( 15) 00:11:12.613 22878.022 - 22997.178: 98.1075% ( 15) 00:11:12.613 22997.178 - 23116.335: 98.2628% ( 16) 00:11:12.613 23116.335 - 23235.491: 98.4181% ( 16) 00:11:12.614 23235.491 - 23354.647: 98.5443% ( 13) 00:11:12.614 23354.647 - 23473.804: 98.6704% ( 13) 00:11:12.614 23473.804 - 23592.960: 98.7286% ( 6) 00:11:12.614 23592.960 - 23712.116: 98.7481% ( 2) 00:11:12.614 23712.116 - 23831.273: 98.7578% ( 1) 00:11:12.614 26333.556 - 26452.713: 98.7869% ( 3) 00:11:12.614 26452.713 - 26571.869: 98.8451% ( 6) 00:11:12.614 26571.869 - 26691.025: 98.8645% ( 2) 00:11:12.614 26691.025 - 26810.182: 98.8936% ( 3) 00:11:12.614 26810.182 - 26929.338: 98.9227% ( 3) 00:11:12.614 26929.338 - 27048.495: 98.9519% ( 3) 00:11:12.614 27048.495 - 27167.651: 98.9810% ( 3) 00:11:12.614 27167.651 - 27286.807: 99.0198% ( 4) 00:11:12.614 27286.807 - 27405.964: 99.0489% ( 3) 00:11:12.614 27405.964 - 27525.120: 99.0780% ( 3) 00:11:12.614 27525.120 - 27644.276: 99.1071% ( 3) 00:11:12.614 27644.276 - 27763.433: 99.1363% ( 3) 00:11:12.614 27763.433 - 27882.589: 99.1654% ( 3) 00:11:12.614 27882.589 - 28001.745: 99.1945% ( 3) 00:11:12.614 28001.745 - 28120.902: 99.2333% ( 4) 00:11:12.614 28120.902 - 28240.058: 99.2624% ( 3) 00:11:12.614 28240.058 - 28359.215: 99.2915% ( 3) 00:11:12.614 28359.215 - 28478.371: 99.3304% ( 4) 00:11:12.614 28478.371 - 28597.527: 99.3595% ( 3) 00:11:12.614 28597.527 - 28716.684: 99.3789% ( 2) 00:11:12.614 33840.407 - 34078.720: 99.4080% ( 3) 00:11:12.614 34078.720 - 34317.033: 99.4759% ( 7) 00:11:12.614 34317.033 - 34555.345: 99.5342% ( 6) 00:11:12.614 34555.345 - 34793.658: 99.6021% ( 7) 00:11:12.614 34793.658 - 35031.971: 99.6700% ( 7) 00:11:12.614 35031.971 - 35270.284: 99.7283% ( 6) 00:11:12.614 35270.284 - 35508.596: 99.7962% ( 7) 00:11:12.614 35508.596 - 35746.909: 99.8544% ( 6) 00:11:12.614 35746.909 - 35985.222: 99.9127% ( 6) 00:11:12.614 35985.222 - 36223.535: 99.9806% ( 7) 00:11:12.614 36223.535 - 36461.847: 100.0000% ( 2) 00:11:12.614 00:11:12.614 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:12.614 ============================================================================== 00:11:12.614 Range in us Cumulative IO count 00:11:12.614 8579.258 - 8638.836: 0.0388% ( 4) 00:11:12.614 8638.836 - 8698.415: 0.1844% ( 15) 00:11:12.614 8698.415 - 8757.993: 0.2911% ( 11) 00:11:12.614 8757.993 - 8817.571: 0.4852% ( 20) 00:11:12.614 8817.571 - 8877.149: 0.8249% ( 35) 00:11:12.614 8877.149 - 8936.727: 1.1840% ( 37) 00:11:12.614 8936.727 - 8996.305: 1.5237% ( 35) 00:11:12.614 8996.305 - 9055.884: 1.7857% ( 27) 00:11:12.614 9055.884 - 9115.462: 2.0963% ( 32) 00:11:12.614 
9115.462 - 9175.040: 2.4845% ( 40) 00:11:12.614 9175.040 - 9234.618: 2.8727% ( 40) 00:11:12.614 9234.618 - 9294.196: 3.3967% ( 54) 00:11:12.614 9294.196 - 9353.775: 3.9014% ( 52) 00:11:12.614 9353.775 - 9413.353: 4.4061% ( 52) 00:11:12.614 9413.353 - 9472.931: 5.0466% ( 66) 00:11:12.614 9472.931 - 9532.509: 5.5804% ( 55) 00:11:12.614 9532.509 - 9592.087: 6.0947% ( 53) 00:11:12.614 9592.087 - 9651.665: 6.5897% ( 51) 00:11:12.614 9651.665 - 9711.244: 7.1234% ( 55) 00:11:12.614 9711.244 - 9770.822: 7.5019% ( 39) 00:11:12.614 9770.822 - 9830.400: 7.8610% ( 37) 00:11:12.614 9830.400 - 9889.978: 8.1716% ( 32) 00:11:12.614 9889.978 - 9949.556: 8.5016% ( 34) 00:11:12.614 9949.556 - 10009.135: 8.9092% ( 42) 00:11:12.614 10009.135 - 10068.713: 9.2585% ( 36) 00:11:12.614 10068.713 - 10128.291: 9.6370% ( 39) 00:11:12.614 10128.291 - 10187.869: 10.1417% ( 52) 00:11:12.614 10187.869 - 10247.447: 10.6658% ( 54) 00:11:12.614 10247.447 - 10307.025: 11.1025% ( 45) 00:11:12.614 10307.025 - 10366.604: 11.8304% ( 75) 00:11:12.614 10366.604 - 10426.182: 12.6165% ( 81) 00:11:12.614 10426.182 - 10485.760: 13.3832% ( 79) 00:11:12.614 10485.760 - 10545.338: 14.1304% ( 77) 00:11:12.614 10545.338 - 10604.916: 14.9651% ( 86) 00:11:12.614 10604.916 - 10664.495: 16.0520% ( 112) 00:11:12.614 10664.495 - 10724.073: 17.1584% ( 114) 00:11:12.614 10724.073 - 10783.651: 18.1580% ( 103) 00:11:12.614 10783.651 - 10843.229: 19.1188% ( 99) 00:11:12.614 10843.229 - 10902.807: 20.0505% ( 96) 00:11:12.614 10902.807 - 10962.385: 21.1471% ( 113) 00:11:12.614 10962.385 - 11021.964: 22.3311% ( 122) 00:11:12.614 11021.964 - 11081.542: 23.4181% ( 112) 00:11:12.614 11081.542 - 11141.120: 24.9418% ( 157) 00:11:12.614 11141.120 - 11200.698: 26.6790% ( 179) 00:11:12.614 11200.698 - 11260.276: 28.7267% ( 211) 00:11:12.614 11260.276 - 11319.855: 30.7745% ( 211) 00:11:12.614 11319.855 - 11379.433: 33.1328% ( 243) 00:11:12.614 11379.433 - 11439.011: 35.5105% ( 245) 00:11:12.614 11439.011 - 11498.589: 37.9173% ( 248) 00:11:12.614 11498.589 - 11558.167: 40.3533% ( 251) 00:11:12.614 11558.167 - 11617.745: 42.9542% ( 268) 00:11:12.614 11617.745 - 11677.324: 45.5745% ( 270) 00:11:12.614 11677.324 - 11736.902: 47.8649% ( 236) 00:11:12.614 11736.902 - 11796.480: 50.2717% ( 248) 00:11:12.614 11796.480 - 11856.058: 52.5427% ( 234) 00:11:12.614 11856.058 - 11915.636: 54.9010% ( 243) 00:11:12.614 11915.636 - 11975.215: 57.2593% ( 243) 00:11:12.614 11975.215 - 12034.793: 59.7535% ( 257) 00:11:12.614 12034.793 - 12094.371: 62.2962% ( 262) 00:11:12.614 12094.371 - 12153.949: 64.7127% ( 249) 00:11:12.614 12153.949 - 12213.527: 66.9158% ( 227) 00:11:12.614 12213.527 - 12273.105: 68.9926% ( 214) 00:11:12.614 12273.105 - 12332.684: 70.8657% ( 193) 00:11:12.614 12332.684 - 12392.262: 72.5738% ( 176) 00:11:12.614 12392.262 - 12451.840: 74.1266% ( 160) 00:11:12.614 12451.840 - 12511.418: 75.5726% ( 149) 00:11:12.614 12511.418 - 12570.996: 76.8828% ( 135) 00:11:12.614 12570.996 - 12630.575: 78.0085% ( 116) 00:11:12.614 12630.575 - 12690.153: 79.1537% ( 118) 00:11:12.614 12690.153 - 12749.731: 80.1145% ( 99) 00:11:12.614 12749.731 - 12809.309: 81.0365% ( 95) 00:11:12.614 12809.309 - 12868.887: 82.0167% ( 101) 00:11:12.614 12868.887 - 12928.465: 82.7446% ( 75) 00:11:12.614 12928.465 - 12988.044: 83.4239% ( 70) 00:11:12.614 12988.044 - 13047.622: 84.0936% ( 69) 00:11:12.614 13047.622 - 13107.200: 84.6661% ( 59) 00:11:12.614 13107.200 - 13166.778: 85.2776% ( 63) 00:11:12.614 13166.778 - 13226.356: 85.7725% ( 51) 00:11:12.614 13226.356 - 13285.935: 86.2189% ( 46) 
00:11:12.614 13285.935 - 13345.513: 86.5877% ( 38) 00:11:12.614 13345.513 - 13405.091: 86.8595% ( 28) 00:11:12.614 13405.091 - 13464.669: 87.1603% ( 31) 00:11:12.614 13464.669 - 13524.247: 87.3738% ( 22) 00:11:12.614 13524.247 - 13583.825: 87.6456% ( 28) 00:11:12.614 13583.825 - 13643.404: 87.9076% ( 27) 00:11:12.614 13643.404 - 13702.982: 88.2182% ( 32) 00:11:12.614 13702.982 - 13762.560: 88.4608% ( 25) 00:11:12.614 13762.560 - 13822.138: 88.6840% ( 23) 00:11:12.614 13822.138 - 13881.716: 88.8393% ( 16) 00:11:12.614 13881.716 - 13941.295: 88.9849% ( 15) 00:11:12.614 13941.295 - 14000.873: 89.1304% ( 15) 00:11:12.614 14000.873 - 14060.451: 89.2469% ( 12) 00:11:12.614 14060.451 - 14120.029: 89.3925% ( 15) 00:11:12.614 14120.029 - 14179.607: 89.5283% ( 14) 00:11:12.614 14179.607 - 14239.185: 89.7321% ( 21) 00:11:12.614 14239.185 - 14298.764: 89.8971% ( 17) 00:11:12.614 14298.764 - 14358.342: 90.0524% ( 16) 00:11:12.614 14358.342 - 14417.920: 90.2077% ( 16) 00:11:12.614 14417.920 - 14477.498: 90.3339% ( 13) 00:11:12.614 14477.498 - 14537.076: 90.4794% ( 15) 00:11:12.614 14537.076 - 14596.655: 90.5959% ( 12) 00:11:12.614 14596.655 - 14656.233: 90.6832% ( 9) 00:11:12.614 14656.233 - 14715.811: 90.7609% ( 8) 00:11:12.614 14715.811 - 14775.389: 90.8385% ( 8) 00:11:12.614 14775.389 - 14834.967: 90.9356% ( 10) 00:11:12.614 14834.967 - 14894.545: 91.0035% ( 7) 00:11:12.614 14894.545 - 14954.124: 91.0617% ( 6) 00:11:12.614 14954.124 - 15013.702: 91.1200% ( 6) 00:11:12.614 15013.702 - 15073.280: 91.1782% ( 6) 00:11:12.614 15073.280 - 15132.858: 91.2461% ( 7) 00:11:12.614 15132.858 - 15192.436: 91.2946% ( 5) 00:11:12.614 15192.436 - 15252.015: 91.3626% ( 7) 00:11:12.614 15252.015 - 15371.171: 91.4887% ( 13) 00:11:12.614 15371.171 - 15490.327: 91.6052% ( 12) 00:11:12.614 15490.327 - 15609.484: 91.7896% ( 19) 00:11:12.614 15609.484 - 15728.640: 91.9740% ( 19) 00:11:12.614 15728.640 - 15847.796: 92.0807% ( 11) 00:11:12.614 15847.796 - 15966.953: 92.1487% ( 7) 00:11:12.614 15966.953 - 16086.109: 92.2263% ( 8) 00:11:12.615 16086.109 - 16205.265: 92.3040% ( 8) 00:11:12.615 16205.265 - 16324.422: 92.3913% ( 9) 00:11:12.615 16324.422 - 16443.578: 92.6825% ( 30) 00:11:12.615 16443.578 - 16562.735: 92.7698% ( 9) 00:11:12.615 16562.735 - 16681.891: 92.8183% ( 5) 00:11:12.615 16681.891 - 16801.047: 92.8863% ( 7) 00:11:12.615 16801.047 - 16920.204: 92.9445% ( 6) 00:11:12.615 16920.204 - 17039.360: 93.0124% ( 7) 00:11:12.615 17039.360 - 17158.516: 93.0901% ( 8) 00:11:12.615 17158.516 - 17277.673: 93.1677% ( 8) 00:11:12.615 17277.673 - 17396.829: 93.3230% ( 16) 00:11:12.615 17396.829 - 17515.985: 93.4589% ( 14) 00:11:12.615 17515.985 - 17635.142: 93.5559% ( 10) 00:11:12.615 17635.142 - 17754.298: 93.6724% ( 12) 00:11:12.615 17754.298 - 17873.455: 93.7985% ( 13) 00:11:12.615 17873.455 - 17992.611: 93.9150% ( 12) 00:11:12.615 17992.611 - 18111.767: 94.0411% ( 13) 00:11:12.615 18111.767 - 18230.924: 94.1576% ( 12) 00:11:12.615 18230.924 - 18350.080: 94.2450% ( 9) 00:11:12.615 18350.080 - 18469.236: 94.3420% ( 10) 00:11:12.615 18469.236 - 18588.393: 94.3614% ( 2) 00:11:12.615 18588.393 - 18707.549: 94.4002% ( 4) 00:11:12.615 18707.549 - 18826.705: 94.4099% ( 1) 00:11:12.615 19899.113 - 20018.269: 94.4293% ( 2) 00:11:12.615 20018.269 - 20137.425: 94.4391% ( 1) 00:11:12.615 20137.425 - 20256.582: 94.4682% ( 3) 00:11:12.615 20256.582 - 20375.738: 94.4876% ( 2) 00:11:12.615 20375.738 - 20494.895: 94.6914% ( 21) 00:11:12.615 20494.895 - 20614.051: 94.9049% ( 22) 00:11:12.615 20614.051 - 20733.207: 95.1475% ( 25) 00:11:12.615 
20733.207 - 20852.364: 95.4290% ( 29) 00:11:12.615 20852.364 - 20971.520: 95.6425% ( 22) 00:11:12.615 20971.520 - 21090.676: 95.8463% ( 21) 00:11:12.615 21090.676 - 21209.833: 95.9821% ( 14) 00:11:12.615 21209.833 - 21328.989: 96.1180% ( 14) 00:11:12.615 21328.989 - 21448.145: 96.2830% ( 17) 00:11:12.615 21448.145 - 21567.302: 96.3995% ( 12) 00:11:12.615 21567.302 - 21686.458: 96.5353% ( 14) 00:11:12.615 21686.458 - 21805.615: 96.6615% ( 13) 00:11:12.615 21805.615 - 21924.771: 96.7974% ( 14) 00:11:12.615 21924.771 - 22043.927: 96.9526% ( 16) 00:11:12.615 22043.927 - 22163.084: 97.0788% ( 13) 00:11:12.615 22163.084 - 22282.240: 97.2244% ( 15) 00:11:12.615 22282.240 - 22401.396: 97.3700% ( 15) 00:11:12.615 22401.396 - 22520.553: 97.5252% ( 16) 00:11:12.615 22520.553 - 22639.709: 97.6708% ( 15) 00:11:12.615 22639.709 - 22758.865: 97.8067% ( 14) 00:11:12.615 22758.865 - 22878.022: 97.9717% ( 17) 00:11:12.615 22878.022 - 22997.178: 98.1172% ( 15) 00:11:12.615 22997.178 - 23116.335: 98.2628% ( 15) 00:11:12.615 23116.335 - 23235.491: 98.3987% ( 14) 00:11:12.615 23235.491 - 23354.647: 98.5540% ( 16) 00:11:12.615 23354.647 - 23473.804: 98.6898% ( 14) 00:11:12.615 23473.804 - 23592.960: 98.7384% ( 5) 00:11:12.615 23592.960 - 23712.116: 98.7578% ( 2) 00:11:12.615 23831.273 - 23950.429: 98.8257% ( 7) 00:11:12.615 23950.429 - 24069.585: 98.8936% ( 7) 00:11:12.615 24069.585 - 24188.742: 98.9130% ( 2) 00:11:12.615 24188.742 - 24307.898: 98.9325% ( 2) 00:11:12.615 24307.898 - 24427.055: 98.9519% ( 2) 00:11:12.615 24427.055 - 24546.211: 98.9810% ( 3) 00:11:12.615 24546.211 - 24665.367: 99.0101% ( 3) 00:11:12.615 24665.367 - 24784.524: 99.0392% ( 3) 00:11:12.615 24784.524 - 24903.680: 99.0586% ( 2) 00:11:12.615 24903.680 - 25022.836: 99.0877% ( 3) 00:11:12.615 25022.836 - 25141.993: 99.1168% ( 3) 00:11:12.615 25141.993 - 25261.149: 99.1460% ( 3) 00:11:12.615 25261.149 - 25380.305: 99.1751% ( 3) 00:11:12.615 25380.305 - 25499.462: 99.2139% ( 4) 00:11:12.615 25499.462 - 25618.618: 99.2430% ( 3) 00:11:12.615 25618.618 - 25737.775: 99.2721% ( 3) 00:11:12.615 25737.775 - 25856.931: 99.3012% ( 3) 00:11:12.615 25856.931 - 25976.087: 99.3401% ( 4) 00:11:12.615 25976.087 - 26095.244: 99.3692% ( 3) 00:11:12.615 26095.244 - 26214.400: 99.3789% ( 1) 00:11:12.615 31218.967 - 31457.280: 99.3886% ( 1) 00:11:12.615 31457.280 - 31695.593: 99.4468% ( 6) 00:11:12.615 31695.593 - 31933.905: 99.5148% ( 7) 00:11:12.615 31933.905 - 32172.218: 99.5827% ( 7) 00:11:12.615 32172.218 - 32410.531: 99.6506% ( 7) 00:11:12.615 32410.531 - 32648.844: 99.7089% ( 6) 00:11:12.615 32648.844 - 32887.156: 99.7768% ( 7) 00:11:12.615 32887.156 - 33125.469: 99.8350% ( 6) 00:11:12.615 33125.469 - 33363.782: 99.9030% ( 7) 00:11:12.615 33363.782 - 33602.095: 99.9612% ( 6) 00:11:12.615 33602.095 - 33840.407: 100.0000% ( 4) 00:11:12.615 00:11:12.615 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:12.615 ============================================================================== 00:11:12.615 Range in us Cumulative IO count 00:11:12.615 8579.258 - 8638.836: 0.0485% ( 5) 00:11:12.615 8638.836 - 8698.415: 0.1165% ( 7) 00:11:12.615 8698.415 - 8757.993: 0.3106% ( 20) 00:11:12.615 8757.993 - 8817.571: 0.6211% ( 32) 00:11:12.615 8817.571 - 8877.149: 0.8929% ( 28) 00:11:12.615 8877.149 - 8936.727: 1.2034% ( 32) 00:11:12.615 8936.727 - 8996.305: 1.5334% ( 34) 00:11:12.615 8996.305 - 9055.884: 1.8342% ( 31) 00:11:12.615 9055.884 - 9115.462: 2.1836% ( 36) 00:11:12.615 9115.462 - 9175.040: 2.5524% ( 38) 00:11:12.615 9175.040 - 9234.618: 
2.9406% ( 40) 00:11:12.615 9234.618 - 9294.196: 3.3482% ( 42) 00:11:12.615 9294.196 - 9353.775: 3.7461% ( 41) 00:11:12.615 9353.775 - 9413.353: 4.2896% ( 56) 00:11:12.615 9413.353 - 9472.931: 4.9010% ( 63) 00:11:12.615 9472.931 - 9532.509: 5.4251% ( 54) 00:11:12.615 9532.509 - 9592.087: 5.9394% ( 53) 00:11:12.615 9592.087 - 9651.665: 6.4441% ( 52) 00:11:12.615 9651.665 - 9711.244: 6.9196% ( 49) 00:11:12.615 9711.244 - 9770.822: 7.4340% ( 53) 00:11:12.615 9770.822 - 9830.400: 7.7349% ( 31) 00:11:12.615 9830.400 - 9889.978: 8.1134% ( 39) 00:11:12.615 9889.978 - 9949.556: 8.5210% ( 42) 00:11:12.615 9949.556 - 10009.135: 8.9771% ( 47) 00:11:12.615 10009.135 - 10068.713: 9.3168% ( 35) 00:11:12.615 10068.713 - 10128.291: 9.8020% ( 50) 00:11:12.615 10128.291 - 10187.869: 10.2484% ( 46) 00:11:12.615 10187.869 - 10247.447: 10.8210% ( 59) 00:11:12.615 10247.447 - 10307.025: 11.4713% ( 67) 00:11:12.615 10307.025 - 10366.604: 12.1118% ( 66) 00:11:12.615 10366.604 - 10426.182: 12.8688% ( 78) 00:11:12.615 10426.182 - 10485.760: 13.6840% ( 84) 00:11:12.615 10485.760 - 10545.338: 14.6351% ( 98) 00:11:12.615 10545.338 - 10604.916: 15.5765% ( 97) 00:11:12.615 10604.916 - 10664.495: 16.4596% ( 91) 00:11:12.615 10664.495 - 10724.073: 17.3816% ( 95) 00:11:12.615 10724.073 - 10783.651: 18.4491% ( 110) 00:11:12.615 10783.651 - 10843.229: 19.3711% ( 95) 00:11:12.615 10843.229 - 10902.807: 20.3028% ( 96) 00:11:12.615 10902.807 - 10962.385: 21.3121% ( 104) 00:11:12.615 10962.385 - 11021.964: 22.2826% ( 100) 00:11:12.615 11021.964 - 11081.542: 23.3696% ( 112) 00:11:12.615 11081.542 - 11141.120: 25.1262% ( 181) 00:11:12.615 11141.120 - 11200.698: 26.9216% ( 185) 00:11:12.615 11200.698 - 11260.276: 28.6200% ( 175) 00:11:12.615 11260.276 - 11319.855: 30.4057% ( 184) 00:11:12.615 11319.855 - 11379.433: 32.4534% ( 211) 00:11:12.615 11379.433 - 11439.011: 34.8797% ( 250) 00:11:12.615 11439.011 - 11498.589: 37.7426% ( 295) 00:11:12.615 11498.589 - 11558.167: 40.5765% ( 292) 00:11:12.615 11558.167 - 11617.745: 43.4297% ( 294) 00:11:12.615 11617.745 - 11677.324: 45.8366% ( 248) 00:11:12.615 11677.324 - 11736.902: 48.1561% ( 239) 00:11:12.615 11736.902 - 11796.480: 50.4561% ( 237) 00:11:12.615 11796.480 - 11856.058: 52.6203% ( 223) 00:11:12.615 11856.058 - 11915.636: 55.0078% ( 246) 00:11:12.615 11915.636 - 11975.215: 57.2593% ( 232) 00:11:12.615 11975.215 - 12034.793: 59.6079% ( 242) 00:11:12.615 12034.793 - 12094.371: 61.8983% ( 236) 00:11:12.615 12094.371 - 12153.949: 64.1887% ( 236) 00:11:12.615 12153.949 - 12213.527: 66.3723% ( 225) 00:11:12.615 12213.527 - 12273.105: 68.5462% ( 224) 00:11:12.615 12273.105 - 12332.684: 70.7007% ( 222) 00:11:12.615 12332.684 - 12392.262: 72.5641% ( 192) 00:11:12.615 12392.262 - 12451.840: 74.1168% ( 160) 00:11:12.615 12451.840 - 12511.418: 75.4950% ( 142) 00:11:12.615 12511.418 - 12570.996: 76.7760% ( 132) 00:11:12.615 12570.996 - 12630.575: 78.0474% ( 131) 00:11:12.615 12630.575 - 12690.153: 79.0470% ( 103) 00:11:12.615 12690.153 - 12749.731: 79.9981% ( 98) 00:11:12.615 12749.731 - 12809.309: 80.7939% ( 82) 00:11:12.615 12809.309 - 12868.887: 81.4732% ( 70) 00:11:12.615 12868.887 - 12928.465: 82.1332% ( 68) 00:11:12.615 12928.465 - 12988.044: 82.7057% ( 59) 00:11:12.615 12988.044 - 13047.622: 83.2686% ( 58) 00:11:12.615 13047.622 - 13107.200: 83.7927% ( 54) 00:11:12.615 13107.200 - 13166.778: 84.3653% ( 59) 00:11:12.615 13166.778 - 13226.356: 84.9282% ( 58) 00:11:12.615 13226.356 - 13285.935: 85.4134% ( 50) 00:11:12.615 13285.935 - 13345.513: 85.8502% ( 45) 00:11:12.615 13345.513 - 
13405.091: 86.2578% ( 42) 00:11:12.615 13405.091 - 13464.669: 86.6363% ( 39) 00:11:12.615 13464.669 - 13524.247: 86.8983% ( 27) 00:11:12.615 13524.247 - 13583.825: 87.1894% ( 30) 00:11:12.615 13583.825 - 13643.404: 87.4321% ( 25) 00:11:12.616 13643.404 - 13702.982: 87.6553% ( 23) 00:11:12.616 13702.982 - 13762.560: 87.8494% ( 20) 00:11:12.616 13762.560 - 13822.138: 88.0532% ( 21) 00:11:12.616 13822.138 - 13881.716: 88.2473% ( 20) 00:11:12.616 13881.716 - 13941.295: 88.4511% ( 21) 00:11:12.616 13941.295 - 14000.873: 88.6161% ( 17) 00:11:12.616 14000.873 - 14060.451: 88.7714% ( 16) 00:11:12.616 14060.451 - 14120.029: 88.9460% ( 18) 00:11:12.616 14120.029 - 14179.607: 89.1207% ( 18) 00:11:12.616 14179.607 - 14239.185: 89.2954% ( 18) 00:11:12.616 14239.185 - 14298.764: 89.4507% ( 16) 00:11:12.616 14298.764 - 14358.342: 89.5575% ( 11) 00:11:12.616 14358.342 - 14417.920: 89.7030% ( 15) 00:11:12.616 14417.920 - 14477.498: 89.8874% ( 19) 00:11:12.616 14477.498 - 14537.076: 90.0524% ( 17) 00:11:12.616 14537.076 - 14596.655: 90.1786% ( 13) 00:11:12.616 14596.655 - 14656.233: 90.2853% ( 11) 00:11:12.616 14656.233 - 14715.811: 90.4018% ( 12) 00:11:12.616 14715.811 - 14775.389: 90.5377% ( 14) 00:11:12.616 14775.389 - 14834.967: 90.6347% ( 10) 00:11:12.616 14834.967 - 14894.545: 90.7803% ( 15) 00:11:12.616 14894.545 - 14954.124: 90.8967% ( 12) 00:11:12.616 14954.124 - 15013.702: 90.9744% ( 8) 00:11:12.616 15013.702 - 15073.280: 91.0326% ( 6) 00:11:12.616 15073.280 - 15132.858: 91.0908% ( 6) 00:11:12.616 15132.858 - 15192.436: 91.1394% ( 5) 00:11:12.616 15192.436 - 15252.015: 91.1879% ( 5) 00:11:12.616 15252.015 - 15371.171: 91.4596% ( 28) 00:11:12.616 15371.171 - 15490.327: 91.6149% ( 16) 00:11:12.616 15490.327 - 15609.484: 91.8478% ( 24) 00:11:12.616 15609.484 - 15728.640: 92.0225% ( 18) 00:11:12.616 15728.640 - 15847.796: 92.1778% ( 16) 00:11:12.616 15847.796 - 15966.953: 92.2845% ( 11) 00:11:12.616 15966.953 - 16086.109: 92.4204% ( 14) 00:11:12.616 16086.109 - 16205.265: 92.5757% ( 16) 00:11:12.616 16205.265 - 16324.422: 92.7310% ( 16) 00:11:12.616 16324.422 - 16443.578: 92.9154% ( 19) 00:11:12.616 16443.578 - 16562.735: 93.0901% ( 18) 00:11:12.616 16562.735 - 16681.891: 93.1871% ( 10) 00:11:12.616 16681.891 - 16801.047: 93.2939% ( 11) 00:11:12.616 16801.047 - 16920.204: 93.4006% ( 11) 00:11:12.616 16920.204 - 17039.360: 93.5171% ( 12) 00:11:12.616 17039.360 - 17158.516: 93.6238% ( 11) 00:11:12.616 17158.516 - 17277.673: 93.7112% ( 9) 00:11:12.616 17277.673 - 17396.829: 93.7694% ( 6) 00:11:12.616 17396.829 - 17515.985: 93.7888% ( 2) 00:11:12.616 17635.142 - 17754.298: 93.8179% ( 3) 00:11:12.616 17754.298 - 17873.455: 93.8568% ( 4) 00:11:12.616 17873.455 - 17992.611: 93.9053% ( 5) 00:11:12.616 17992.611 - 18111.767: 93.9441% ( 4) 00:11:12.616 18111.767 - 18230.924: 93.9829% ( 4) 00:11:12.616 18230.924 - 18350.080: 94.0120% ( 3) 00:11:12.616 18350.080 - 18469.236: 94.0606% ( 5) 00:11:12.616 18469.236 - 18588.393: 94.1091% ( 5) 00:11:12.616 18588.393 - 18707.549: 94.1479% ( 4) 00:11:12.616 18707.549 - 18826.705: 94.1673% ( 2) 00:11:12.616 18826.705 - 18945.862: 94.1867% ( 2) 00:11:12.616 18945.862 - 19065.018: 94.2158% ( 3) 00:11:12.616 19065.018 - 19184.175: 94.2450% ( 3) 00:11:12.616 19184.175 - 19303.331: 94.2741% ( 3) 00:11:12.616 19303.331 - 19422.487: 94.3129% ( 4) 00:11:12.616 19422.487 - 19541.644: 94.3420% ( 3) 00:11:12.616 19541.644 - 19660.800: 94.3711% ( 3) 00:11:12.616 19660.800 - 19779.956: 94.4099% ( 4) 00:11:12.616 19899.113 - 20018.269: 94.4196% ( 1) 00:11:12.616 20018.269 - 20137.425: 
94.4585% ( 4) 00:11:12.616 20137.425 - 20256.582: 94.4779% ( 2) 00:11:12.616 20256.582 - 20375.738: 94.5167% ( 4) 00:11:12.616 20375.738 - 20494.895: 94.7011% ( 19) 00:11:12.616 20494.895 - 20614.051: 94.9243% ( 23) 00:11:12.616 20614.051 - 20733.207: 95.2057% ( 29) 00:11:12.616 20733.207 - 20852.364: 95.4193% ( 22) 00:11:12.616 20852.364 - 20971.520: 95.6716% ( 26) 00:11:12.616 20971.520 - 21090.676: 95.9239% ( 26) 00:11:12.616 21090.676 - 21209.833: 96.1180% ( 20) 00:11:12.616 21209.833 - 21328.989: 96.3121% ( 20) 00:11:12.616 21328.989 - 21448.145: 96.5159% ( 21) 00:11:12.616 21448.145 - 21567.302: 96.6615% ( 15) 00:11:12.616 21567.302 - 21686.458: 96.8071% ( 15) 00:11:12.616 21686.458 - 21805.615: 96.9915% ( 19) 00:11:12.616 21805.615 - 21924.771: 97.1467% ( 16) 00:11:12.616 21924.771 - 22043.927: 97.3117% ( 17) 00:11:12.616 22043.927 - 22163.084: 97.4767% ( 17) 00:11:12.616 22163.084 - 22282.240: 97.6514% ( 18) 00:11:12.616 22282.240 - 22401.396: 97.8649% ( 22) 00:11:12.616 22401.396 - 22520.553: 98.0299% ( 17) 00:11:12.616 22520.553 - 22639.709: 98.2337% ( 21) 00:11:12.616 22639.709 - 22758.865: 98.4278% ( 20) 00:11:12.616 22758.865 - 22878.022: 98.6025% ( 18) 00:11:12.616 22878.022 - 22997.178: 98.8063% ( 21) 00:11:12.616 22997.178 - 23116.335: 98.9519% ( 15) 00:11:12.616 23116.335 - 23235.491: 99.0780% ( 13) 00:11:12.616 23235.491 - 23354.647: 99.2236% ( 15) 00:11:12.616 23354.647 - 23473.804: 99.3207% ( 10) 00:11:12.616 23473.804 - 23592.960: 99.3595% ( 4) 00:11:12.616 23592.960 - 23712.116: 99.3789% ( 2) 00:11:12.616 28359.215 - 28478.371: 99.4080% ( 3) 00:11:12.616 28478.371 - 28597.527: 99.4468% ( 4) 00:11:12.616 28597.527 - 28716.684: 99.4759% ( 3) 00:11:12.616 28716.684 - 28835.840: 99.5050% ( 3) 00:11:12.616 28835.840 - 28954.996: 99.5342% ( 3) 00:11:12.616 28954.996 - 29074.153: 99.5633% ( 3) 00:11:12.616 29074.153 - 29193.309: 99.5924% ( 3) 00:11:12.616 29193.309 - 29312.465: 99.6312% ( 4) 00:11:12.616 29312.465 - 29431.622: 99.6603% ( 3) 00:11:12.616 29431.622 - 29550.778: 99.6894% ( 3) 00:11:12.616 29550.778 - 29669.935: 99.7186% ( 3) 00:11:12.616 29669.935 - 29789.091: 99.7574% ( 4) 00:11:12.616 29789.091 - 29908.247: 99.7865% ( 3) 00:11:12.616 29908.247 - 30027.404: 99.8156% ( 3) 00:11:12.616 30027.404 - 30146.560: 99.8350% ( 2) 00:11:12.616 30146.560 - 30265.716: 99.8641% ( 3) 00:11:12.616 30265.716 - 30384.873: 99.8835% ( 2) 00:11:12.616 30384.873 - 30504.029: 99.9030% ( 2) 00:11:12.616 30504.029 - 30742.342: 99.9515% ( 5) 00:11:12.616 30742.342 - 30980.655: 100.0000% ( 5) 00:11:12.616 00:11:12.616 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:12.616 ============================================================================== 00:11:12.616 Range in us Cumulative IO count 00:11:12.616 8638.836 - 8698.415: 0.1359% ( 14) 00:11:12.616 8698.415 - 8757.993: 0.3979% ( 27) 00:11:12.616 8757.993 - 8817.571: 0.7861% ( 40) 00:11:12.616 8817.571 - 8877.149: 1.2325% ( 46) 00:11:12.616 8877.149 - 8936.727: 1.6887% ( 47) 00:11:12.616 8936.727 - 8996.305: 2.0866% ( 41) 00:11:12.616 8996.305 - 9055.884: 2.3486% ( 27) 00:11:12.616 9055.884 - 9115.462: 2.6300% ( 29) 00:11:12.616 9115.462 - 9175.040: 2.9794% ( 36) 00:11:12.616 9175.040 - 9234.618: 3.3094% ( 34) 00:11:12.616 9234.618 - 9294.196: 3.6297% ( 33) 00:11:12.616 9294.196 - 9353.775: 4.0858% ( 47) 00:11:12.616 9353.775 - 9413.353: 4.4546% ( 38) 00:11:12.616 9413.353 - 9472.931: 4.8331% ( 39) 00:11:12.616 9472.931 - 9532.509: 5.2310% ( 41) 00:11:12.616 9532.509 - 9592.087: 5.6871% ( 47) 00:11:12.616 
9592.087 - 9651.665: 6.2888% ( 62) 00:11:12.616 9651.665 - 9711.244: 6.8323% ( 56) 00:11:12.616 9711.244 - 9770.822: 7.3467% ( 53) 00:11:12.616 9770.822 - 9830.400: 7.7252% ( 39) 00:11:12.616 9830.400 - 9889.978: 8.1134% ( 40) 00:11:12.616 9889.978 - 9949.556: 8.4336% ( 33) 00:11:12.616 9949.556 - 10009.135: 8.7539% ( 33) 00:11:12.616 10009.135 - 10068.713: 9.1033% ( 36) 00:11:12.616 10068.713 - 10128.291: 9.5691% ( 48) 00:11:12.616 10128.291 - 10187.869: 10.1708% ( 62) 00:11:12.616 10187.869 - 10247.447: 10.7143% ( 56) 00:11:12.616 10247.447 - 10307.025: 11.2286% ( 53) 00:11:12.616 10307.025 - 10366.604: 11.6654% ( 45) 00:11:12.616 10366.604 - 10426.182: 12.2186% ( 57) 00:11:12.616 10426.182 - 10485.760: 12.8591% ( 66) 00:11:12.616 10485.760 - 10545.338: 13.6064% ( 77) 00:11:12.616 10545.338 - 10604.916: 14.4604% ( 88) 00:11:12.616 10604.916 - 10664.495: 15.4697% ( 104) 00:11:12.616 10664.495 - 10724.073: 16.6731% ( 124) 00:11:12.616 10724.073 - 10783.651: 17.8183% ( 118) 00:11:12.616 10783.651 - 10843.229: 19.0606% ( 128) 00:11:12.616 10843.229 - 10902.807: 20.1960% ( 117) 00:11:12.616 10902.807 - 10962.385: 21.4480% ( 129) 00:11:12.616 10962.385 - 11021.964: 22.7970% ( 139) 00:11:12.616 11021.964 - 11081.542: 24.1460% ( 139) 00:11:12.616 11081.542 - 11141.120: 25.7473% ( 165) 00:11:12.616 11141.120 - 11200.698: 27.6689% ( 198) 00:11:12.616 11200.698 - 11260.276: 29.6099% ( 200) 00:11:12.616 11260.276 - 11319.855: 31.7644% ( 222) 00:11:12.616 11319.855 - 11379.433: 33.6762% ( 197) 00:11:12.616 11379.433 - 11439.011: 35.9763% ( 237) 00:11:12.616 11439.011 - 11498.589: 38.1114% ( 220) 00:11:12.616 11498.589 - 11558.167: 40.4503% ( 241) 00:11:12.616 11558.167 - 11617.745: 43.0124% ( 264) 00:11:12.616 11617.745 - 11677.324: 45.7104% ( 278) 00:11:12.616 11677.324 - 11736.902: 48.3016% ( 267) 00:11:12.616 11736.902 - 11796.480: 50.6502% ( 242) 00:11:12.616 11796.480 - 11856.058: 53.0862% ( 251) 00:11:12.616 11856.058 - 11915.636: 55.4251% ( 241) 00:11:12.616 11915.636 - 11975.215: 57.8707% ( 252) 00:11:12.616 11975.215 - 12034.793: 60.2484% ( 245) 00:11:12.616 12034.793 - 12094.371: 62.5970% ( 242) 00:11:12.616 12094.371 - 12153.949: 64.8195% ( 229) 00:11:12.616 12153.949 - 12213.527: 66.8866% ( 213) 00:11:12.616 12213.527 - 12273.105: 68.8665% ( 204) 00:11:12.616 12273.105 - 12332.684: 70.6910% ( 188) 00:11:12.616 12332.684 - 12392.262: 72.3797% ( 174) 00:11:12.616 12392.262 - 12451.840: 73.8548% ( 152) 00:11:12.616 12451.840 - 12511.418: 75.2329% ( 142) 00:11:12.616 12511.418 - 12570.996: 76.4266% ( 123) 00:11:12.616 12570.996 - 12630.575: 77.4942% ( 110) 00:11:12.616 12630.575 - 12690.153: 78.4841% ( 102) 00:11:12.616 12690.153 - 12749.731: 79.4740% ( 102) 00:11:12.616 12749.731 - 12809.309: 80.3280% ( 88) 00:11:12.617 12809.309 - 12868.887: 81.0753% ( 77) 00:11:12.617 12868.887 - 12928.465: 81.7741% ( 72) 00:11:12.617 12928.465 - 12988.044: 82.4049% ( 65) 00:11:12.617 12988.044 - 13047.622: 82.9581% ( 57) 00:11:12.617 13047.622 - 13107.200: 83.4433% ( 50) 00:11:12.617 13107.200 - 13166.778: 84.0062% ( 58) 00:11:12.617 13166.778 - 13226.356: 84.5497% ( 56) 00:11:12.617 13226.356 - 13285.935: 85.0738% ( 54) 00:11:12.617 13285.935 - 13345.513: 85.4523% ( 39) 00:11:12.617 13345.513 - 13405.091: 85.8307% ( 39) 00:11:12.617 13405.091 - 13464.669: 86.2384% ( 42) 00:11:12.617 13464.669 - 13524.247: 86.6168% ( 39) 00:11:12.617 13524.247 - 13583.825: 86.9565% ( 35) 00:11:12.617 13583.825 - 13643.404: 87.2865% ( 34) 00:11:12.617 13643.404 - 13702.982: 87.5776% ( 30) 00:11:12.617 13702.982 - 
13762.560: 87.8494% ( 28) 00:11:12.617 13762.560 - 13822.138: 88.1017% ( 26) 00:11:12.617 13822.138 - 13881.716: 88.3152% ( 22) 00:11:12.617 13881.716 - 13941.295: 88.5384% ( 23) 00:11:12.617 13941.295 - 14000.873: 88.8199% ( 29) 00:11:12.617 14000.873 - 14060.451: 89.0334% ( 22) 00:11:12.617 14060.451 - 14120.029: 89.2081% ( 18) 00:11:12.617 14120.029 - 14179.607: 89.4022% ( 20) 00:11:12.617 14179.607 - 14239.185: 89.5477% ( 15) 00:11:12.617 14239.185 - 14298.764: 89.6836% ( 14) 00:11:12.617 14298.764 - 14358.342: 89.8389% ( 16) 00:11:12.617 14358.342 - 14417.920: 90.0621% ( 23) 00:11:12.617 14417.920 - 14477.498: 90.3047% ( 25) 00:11:12.617 14477.498 - 14537.076: 90.4600% ( 16) 00:11:12.617 14537.076 - 14596.655: 90.5959% ( 14) 00:11:12.617 14596.655 - 14656.233: 90.7026% ( 11) 00:11:12.617 14656.233 - 14715.811: 90.8191% ( 12) 00:11:12.617 14715.811 - 14775.389: 90.9161% ( 10) 00:11:12.617 14775.389 - 14834.967: 90.9938% ( 8) 00:11:12.617 14834.967 - 14894.545: 91.0617% ( 7) 00:11:12.617 14894.545 - 14954.124: 91.1297% ( 7) 00:11:12.617 14954.124 - 15013.702: 91.2073% ( 8) 00:11:12.617 15013.702 - 15073.280: 91.2752% ( 7) 00:11:12.617 15073.280 - 15132.858: 91.3529% ( 8) 00:11:12.617 15132.858 - 15192.436: 91.4208% ( 7) 00:11:12.617 15192.436 - 15252.015: 91.4693% ( 5) 00:11:12.617 15252.015 - 15371.171: 91.5664% ( 10) 00:11:12.617 15371.171 - 15490.327: 91.6440% ( 8) 00:11:12.617 15490.327 - 15609.484: 91.7217% ( 8) 00:11:12.617 15609.484 - 15728.640: 91.8284% ( 11) 00:11:12.617 15728.640 - 15847.796: 91.9255% ( 10) 00:11:12.617 15847.796 - 15966.953: 92.0419% ( 12) 00:11:12.617 15966.953 - 16086.109: 92.1584% ( 12) 00:11:12.617 16086.109 - 16205.265: 92.2943% ( 14) 00:11:12.617 16205.265 - 16324.422: 92.3719% ( 8) 00:11:12.617 16324.422 - 16443.578: 92.4398% ( 7) 00:11:12.617 16443.578 - 16562.735: 92.5175% ( 8) 00:11:12.617 16562.735 - 16681.891: 92.5757% ( 6) 00:11:12.617 16681.891 - 16801.047: 92.6436% ( 7) 00:11:12.617 16801.047 - 16920.204: 92.7310% ( 9) 00:11:12.617 16920.204 - 17039.360: 92.8474% ( 12) 00:11:12.617 17039.360 - 17158.516: 93.0415% ( 20) 00:11:12.617 17158.516 - 17277.673: 93.1968% ( 16) 00:11:12.617 17277.673 - 17396.829: 93.3230% ( 13) 00:11:12.617 17396.829 - 17515.985: 93.4006% ( 8) 00:11:12.617 17515.985 - 17635.142: 93.4783% ( 8) 00:11:12.617 17635.142 - 17754.298: 93.5559% ( 8) 00:11:12.617 17754.298 - 17873.455: 93.6335% ( 8) 00:11:12.617 17873.455 - 17992.611: 93.6821% ( 5) 00:11:12.617 17992.611 - 18111.767: 93.7500% ( 7) 00:11:12.617 18111.767 - 18230.924: 93.7888% ( 4) 00:11:12.617 18230.924 - 18350.080: 93.8082% ( 2) 00:11:12.617 18350.080 - 18469.236: 93.8373% ( 3) 00:11:12.617 18469.236 - 18588.393: 93.8665% ( 3) 00:11:12.617 18588.393 - 18707.549: 93.9247% ( 6) 00:11:12.617 18707.549 - 18826.705: 93.9829% ( 6) 00:11:12.617 18826.705 - 18945.862: 94.0509% ( 7) 00:11:12.617 18945.862 - 19065.018: 94.1091% ( 6) 00:11:12.617 19065.018 - 19184.175: 94.1770% ( 7) 00:11:12.617 19184.175 - 19303.331: 94.2450% ( 7) 00:11:12.617 19303.331 - 19422.487: 94.3129% ( 7) 00:11:12.617 19422.487 - 19541.644: 94.3711% ( 6) 00:11:12.617 19541.644 - 19660.800: 94.4293% ( 6) 00:11:12.617 19660.800 - 19779.956: 94.5943% ( 17) 00:11:12.617 19779.956 - 19899.113: 94.6914% ( 10) 00:11:12.617 19899.113 - 20018.269: 94.7884% ( 10) 00:11:12.617 20018.269 - 20137.425: 94.9049% ( 12) 00:11:12.617 20137.425 - 20256.582: 95.0214% ( 12) 00:11:12.617 20256.582 - 20375.738: 95.1669% ( 15) 00:11:12.617 20375.738 - 20494.895: 95.4387% ( 28) 00:11:12.617 20494.895 - 20614.051: 95.7201% 
( 29) 00:11:12.617 20614.051 - 20733.207: 96.0210% ( 31) 00:11:12.617 20733.207 - 20852.364: 96.2830% ( 27) 00:11:12.617 20852.364 - 20971.520: 96.4771% ( 20) 00:11:12.617 20971.520 - 21090.676: 96.6324% ( 16) 00:11:12.617 21090.676 - 21209.833: 96.8071% ( 18) 00:11:12.617 21209.833 - 21328.989: 96.9526% ( 15) 00:11:12.617 21328.989 - 21448.145: 97.0982% ( 15) 00:11:12.617 21448.145 - 21567.302: 97.2535% ( 16) 00:11:12.617 21567.302 - 21686.458: 97.4185% ( 17) 00:11:12.617 21686.458 - 21805.615: 97.5349% ( 12) 00:11:12.617 21805.615 - 21924.771: 97.6514% ( 12) 00:11:12.617 21924.771 - 22043.927: 97.7873% ( 14) 00:11:12.617 22043.927 - 22163.084: 97.9037% ( 12) 00:11:12.617 22163.084 - 22282.240: 98.0202% ( 12) 00:11:12.617 22282.240 - 22401.396: 98.1561% ( 14) 00:11:12.617 22401.396 - 22520.553: 98.2822% ( 13) 00:11:12.617 22520.553 - 22639.709: 98.4181% ( 14) 00:11:12.617 22639.709 - 22758.865: 98.5637% ( 15) 00:11:12.617 22758.865 - 22878.022: 98.6801% ( 12) 00:11:12.617 22878.022 - 22997.178: 98.8063% ( 13) 00:11:12.617 22997.178 - 23116.335: 98.9422% ( 14) 00:11:12.617 23116.335 - 23235.491: 99.0683% ( 13) 00:11:12.617 23235.491 - 23354.647: 99.2042% ( 14) 00:11:12.617 23354.647 - 23473.804: 99.2915% ( 9) 00:11:12.617 23473.804 - 23592.960: 99.3692% ( 8) 00:11:12.617 23592.960 - 23712.116: 99.3789% ( 1) 00:11:12.617 25856.931 - 25976.087: 99.3983% ( 2) 00:11:12.617 25976.087 - 26095.244: 99.4274% ( 3) 00:11:12.617 26095.244 - 26214.400: 99.4565% ( 3) 00:11:12.617 26214.400 - 26333.556: 99.4953% ( 4) 00:11:12.617 26333.556 - 26452.713: 99.5245% ( 3) 00:11:12.617 26452.713 - 26571.869: 99.5536% ( 3) 00:11:12.617 26571.869 - 26691.025: 99.5827% ( 3) 00:11:12.617 26691.025 - 26810.182: 99.6215% ( 4) 00:11:12.617 26810.182 - 26929.338: 99.6506% ( 3) 00:11:12.617 26929.338 - 27048.495: 99.6894% ( 4) 00:11:12.617 27048.495 - 27167.651: 99.7186% ( 3) 00:11:12.617 27167.651 - 27286.807: 99.7477% ( 3) 00:11:12.617 27286.807 - 27405.964: 99.7865% ( 4) 00:11:12.617 27405.964 - 27525.120: 99.8156% ( 3) 00:11:12.617 27525.120 - 27644.276: 99.8544% ( 4) 00:11:12.617 27644.276 - 27763.433: 99.8835% ( 3) 00:11:12.617 27763.433 - 27882.589: 99.9224% ( 4) 00:11:12.617 27882.589 - 28001.745: 99.9515% ( 3) 00:11:12.617 28001.745 - 28120.902: 99.9806% ( 3) 00:11:12.617 28120.902 - 28240.058: 100.0000% ( 2) 00:11:12.617 00:11:12.617 11:19:34 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:12.617 00:11:12.617 real 0m2.767s 00:11:12.617 user 0m2.329s 00:11:12.617 sys 0m0.314s 00:11:12.617 11:19:34 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.617 11:19:34 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:11:12.617 ************************************ 00:11:12.617 END TEST nvme_perf 00:11:12.617 ************************************ 00:11:12.617 11:19:34 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:12.617 11:19:34 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:12.617 11:19:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.617 11:19:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:12.617 ************************************ 00:11:12.617 START TEST nvme_hello_world 00:11:12.617 ************************************ 00:11:12.617 11:19:34 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:12.876 Initializing NVMe Controllers 00:11:12.876 Attached to 
0000:00:10.0 00:11:12.876 Namespace ID: 1 size: 6GB 00:11:12.876 Attached to 0000:00:11.0 00:11:12.876 Namespace ID: 1 size: 5GB 00:11:12.876 Attached to 0000:00:13.0 00:11:12.876 Namespace ID: 1 size: 1GB 00:11:12.876 Attached to 0000:00:12.0 00:11:12.876 Namespace ID: 1 size: 4GB 00:11:12.876 Namespace ID: 2 size: 4GB 00:11:12.876 Namespace ID: 3 size: 4GB 00:11:12.876 Initialization complete. 00:11:12.876 INFO: using host memory buffer for IO 00:11:12.876 Hello world! 00:11:12.876 INFO: using host memory buffer for IO 00:11:12.876 Hello world! 00:11:12.876 INFO: using host memory buffer for IO 00:11:12.876 Hello world! 00:11:12.876 INFO: using host memory buffer for IO 00:11:12.876 Hello world! 00:11:12.876 INFO: using host memory buffer for IO 00:11:12.876 Hello world! 00:11:12.876 INFO: using host memory buffer for IO 00:11:12.876 Hello world! 00:11:12.876 00:11:12.876 real 0m0.344s 00:11:12.876 user 0m0.149s 00:11:12.876 sys 0m0.150s 00:11:12.876 ************************************ 00:11:12.876 END TEST nvme_hello_world 00:11:12.876 ************************************ 00:11:12.876 11:19:35 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.876 11:19:35 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:13.134 11:19:35 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:13.134 11:19:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:13.134 11:19:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.134 11:19:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:13.134 ************************************ 00:11:13.134 START TEST nvme_sgl 00:11:13.134 ************************************ 00:11:13.134 11:19:35 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:13.392 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:11:13.392 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:11:13.392 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:11:13.392 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:11:13.392 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:11:13.392 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:11:13.392 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:11:13.392 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:11:13.392 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:11:13.392 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:11:13.392 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:11:13.392 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:11:13.392 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:11:13.392 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:11:13.392 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:11:13.392 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:11:13.392 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:11:13.392 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:11:13.392 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:11:13.392 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:11:13.392 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:11:13.392 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:11:13.392 0000:00:13.0: 
build_io_request_10 Invalid IO length parameter 00:11:13.392 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:11:13.392 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:11:13.392 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:11:13.392 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:11:13.392 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:11:13.392 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:11:13.392 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:11:13.392 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:11:13.392 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:11:13.392 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:11:13.392 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:11:13.392 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:11:13.392 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:11:13.392 NVMe Readv/Writev Request test 00:11:13.392 Attached to 0000:00:10.0 00:11:13.392 Attached to 0000:00:11.0 00:11:13.392 Attached to 0000:00:13.0 00:11:13.392 Attached to 0000:00:12.0 00:11:13.392 0000:00:10.0: build_io_request_2 test passed 00:11:13.392 0000:00:10.0: build_io_request_4 test passed 00:11:13.392 0000:00:10.0: build_io_request_5 test passed 00:11:13.392 0000:00:10.0: build_io_request_6 test passed 00:11:13.392 0000:00:10.0: build_io_request_7 test passed 00:11:13.392 0000:00:10.0: build_io_request_10 test passed 00:11:13.392 0000:00:11.0: build_io_request_2 test passed 00:11:13.392 0000:00:11.0: build_io_request_4 test passed 00:11:13.392 0000:00:11.0: build_io_request_5 test passed 00:11:13.392 0000:00:11.0: build_io_request_6 test passed 00:11:13.392 0000:00:11.0: build_io_request_7 test passed 00:11:13.392 0000:00:11.0: build_io_request_10 test passed 00:11:13.392 Cleaning up... 00:11:13.392 00:11:13.392 real 0m0.464s 00:11:13.392 user 0m0.248s 00:11:13.392 sys 0m0.161s 00:11:13.392 ************************************ 00:11:13.392 END TEST nvme_sgl 00:11:13.392 ************************************ 00:11:13.392 11:19:35 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.392 11:19:35 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:11:13.651 11:19:35 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:13.651 11:19:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:13.651 11:19:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.651 11:19:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:13.651 ************************************ 00:11:13.651 START TEST nvme_e2edp 00:11:13.651 ************************************ 00:11:13.651 11:19:35 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:13.909 NVMe Write/Read with End-to-End data protection test 00:11:13.909 Attached to 0000:00:10.0 00:11:13.909 Attached to 0000:00:11.0 00:11:13.909 Attached to 0000:00:13.0 00:11:13.909 Attached to 0000:00:12.0 00:11:13.909 Cleaning up... 
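A note on the nvme_sgl results above: the "Invalid IO length parameter" lines are deliberate negative cases, requests sized so that SGL construction must be rejected, while the "test passed" lines are the positive cases; the suite as a whole still succeeds (see END TEST nvme_sgl just below). A minimal sketch for re-running only this exerciser, using the path exactly as this log invokes it and assuming the root privileges SPDK needs for direct device access:

    # re-run the SGL request-builder tests on all attached controllers
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvme/sgl/sgl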
00:11:13.909 ************************************ 00:11:13.909 END TEST nvme_e2edp 00:11:13.909 ************************************ 00:11:13.909 00:11:13.909 real 0m0.327s 00:11:13.909 user 0m0.128s 00:11:13.909 sys 0m0.147s 00:11:13.909 11:19:35 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.909 11:19:35 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:11:13.909 11:19:35 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:13.909 11:19:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:13.909 11:19:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.909 11:19:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:13.909 ************************************ 00:11:13.909 START TEST nvme_reserve 00:11:13.909 ************************************ 00:11:13.909 11:19:35 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:14.168 ===================================================== 00:11:14.168 NVMe Controller at PCI bus 0, device 16, function 0 00:11:14.168 ===================================================== 00:11:14.168 Reservations: Not Supported 00:11:14.168 ===================================================== 00:11:14.168 NVMe Controller at PCI bus 0, device 17, function 0 00:11:14.168 ===================================================== 00:11:14.168 Reservations: Not Supported 00:11:14.168 ===================================================== 00:11:14.168 NVMe Controller at PCI bus 0, device 19, function 0 00:11:14.168 ===================================================== 00:11:14.168 Reservations: Not Supported 00:11:14.168 ===================================================== 00:11:14.168 NVMe Controller at PCI bus 0, device 18, function 0 00:11:14.168 ===================================================== 00:11:14.168 Reservations: Not Supported 00:11:14.168 Reservation test passed 00:11:14.168 00:11:14.168 real 0m0.337s 00:11:14.168 user 0m0.131s 00:11:14.168 sys 0m0.157s 00:11:14.168 11:19:36 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.168 11:19:36 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:11:14.168 ************************************ 00:11:14.168 END TEST nvme_reserve 00:11:14.168 ************************************ 00:11:14.426 11:19:36 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:14.426 11:19:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.426 11:19:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.426 11:19:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:14.426 ************************************ 00:11:14.426 START TEST nvme_err_injection 00:11:14.426 ************************************ 00:11:14.426 11:19:36 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:14.692 NVMe Error Injection test 00:11:14.692 Attached to 0000:00:10.0 00:11:14.692 Attached to 0000:00:11.0 00:11:14.692 Attached to 0000:00:13.0 00:11:14.692 Attached to 0000:00:12.0 00:11:14.692 0000:00:12.0: get features failed as expected 00:11:14.692 0000:00:10.0: get features failed as expected 00:11:14.692 0000:00:11.0: get features failed as expected 00:11:14.692 0000:00:13.0: get features failed as expected 00:11:14.692 
0000:00:10.0: get features successfully as expected 00:11:14.692 0000:00:11.0: get features successfully as expected 00:11:14.692 0000:00:13.0: get features successfully as expected 00:11:14.692 0000:00:12.0: get features successfully as expected 00:11:14.692 0000:00:10.0: read failed as expected 00:11:14.692 0000:00:11.0: read failed as expected 00:11:14.692 0000:00:13.0: read failed as expected 00:11:14.692 0000:00:12.0: read failed as expected 00:11:14.692 0000:00:10.0: read successfully as expected 00:11:14.692 0000:00:11.0: read successfully as expected 00:11:14.692 0000:00:13.0: read successfully as expected 00:11:14.692 0000:00:12.0: read successfully as expected 00:11:14.692 Cleaning up... 00:11:14.692 00:11:14.692 real 0m0.315s 00:11:14.692 user 0m0.124s 00:11:14.692 sys 0m0.143s 00:11:14.692 ************************************ 00:11:14.692 END TEST nvme_err_injection 00:11:14.692 ************************************ 00:11:14.692 11:19:36 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.692 11:19:36 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:11:14.692 11:19:36 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:14.692 11:19:36 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:11:14.692 11:19:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.692 11:19:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:14.692 ************************************ 00:11:14.692 START TEST nvme_overhead 00:11:14.692 ************************************ 00:11:14.692 11:19:36 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:16.069 Initializing NVMe Controllers 00:11:16.069 Attached to 0000:00:10.0 00:11:16.069 Attached to 0000:00:11.0 00:11:16.069 Attached to 0000:00:13.0 00:11:16.069 Attached to 0000:00:12.0 00:11:16.069 Initialization complete. Launching workers. 
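A note on the nvme_err_injection pass above: the pairing of "failed as expected" with "successfully as expected" is the whole test. The harness first arms SPDK's command error-injection hooks so that Get Features and reads must fail, then clears them and confirms the same commands succeed. A sketch for repeating it in isolation, path exactly as invoked in this log, root assumed:

    # arm error injection, verify failures, clear it, verify success
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection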
00:11:16.069 submit (in ns) avg, min, max = 17051.5, 13301.8, 99474.5 00:11:16.069 complete (in ns) avg, min, max = 11513.1, 9557.3, 78290.0 00:11:16.069 00:11:16.069 Submit histogram 00:11:16.069 ================ 00:11:16.069 Range in us Cumulative Count 00:11:16.069 13.265 - 13.324: 0.0087% ( 1) 00:11:16.069 13.324 - 13.382: 0.0174% ( 1) 00:11:16.069 14.371 - 14.429: 0.0434% ( 3) 00:11:16.069 14.429 - 14.487: 0.0608% ( 2) 00:11:16.069 14.487 - 14.545: 0.1216% ( 7) 00:11:16.069 14.545 - 14.604: 0.3475% ( 26) 00:11:16.069 14.604 - 14.662: 1.0685% ( 83) 00:11:16.069 14.662 - 14.720: 2.8405% ( 204) 00:11:16.069 14.720 - 14.778: 6.8190% ( 458) 00:11:16.069 14.778 - 14.836: 13.2905% ( 745) 00:11:16.069 14.836 - 14.895: 20.6915% ( 852) 00:11:16.069 14.895 - 15.011: 36.2752% ( 1794) 00:11:16.069 15.011 - 15.127: 48.2887% ( 1383) 00:11:16.069 15.127 - 15.244: 55.4291% ( 822) 00:11:16.069 15.244 - 15.360: 59.3728% ( 454) 00:11:16.069 15.360 - 15.476: 61.6835% ( 266) 00:11:16.069 15.476 - 15.593: 63.7074% ( 233) 00:11:16.069 15.593 - 15.709: 65.5490% ( 212) 00:11:16.069 15.709 - 15.825: 67.3819% ( 211) 00:11:16.069 15.825 - 15.942: 69.1018% ( 198) 00:11:16.069 15.942 - 16.058: 70.6133% ( 174) 00:11:16.069 16.058 - 16.175: 71.8641% ( 144) 00:11:16.069 16.175 - 16.291: 72.9326% ( 123) 00:11:16.069 16.291 - 16.407: 73.9142% ( 113) 00:11:16.069 16.407 - 16.524: 74.7133% ( 92) 00:11:16.069 16.524 - 16.640: 75.4778% ( 88) 00:11:16.069 16.640 - 16.756: 75.9642% ( 56) 00:11:16.069 16.756 - 16.873: 76.3290% ( 42) 00:11:16.069 16.873 - 16.989: 76.7026% ( 43) 00:11:16.069 16.989 - 17.105: 76.9805% ( 32) 00:11:16.069 17.105 - 17.222: 77.3367% ( 41) 00:11:16.069 17.222 - 17.338: 77.5104% ( 20) 00:11:16.069 17.338 - 17.455: 77.6755% ( 19) 00:11:16.069 17.455 - 17.571: 77.7797% ( 12) 00:11:16.069 17.571 - 17.687: 77.8579% ( 9) 00:11:16.069 17.687 - 17.804: 78.0142% ( 18) 00:11:16.069 17.804 - 17.920: 78.1272% ( 13) 00:11:16.069 17.920 - 18.036: 78.2575% ( 15) 00:11:16.069 18.036 - 18.153: 78.3443% ( 10) 00:11:16.069 18.153 - 18.269: 78.4138% ( 8) 00:11:16.069 18.269 - 18.385: 78.4746% ( 7) 00:11:16.069 18.385 - 18.502: 78.5268% ( 6) 00:11:16.069 18.502 - 18.618: 78.5789% ( 6) 00:11:16.069 18.618 - 18.735: 78.7179% ( 16) 00:11:16.069 18.735 - 18.851: 78.7700% ( 6) 00:11:16.069 18.851 - 18.967: 78.8655% ( 11) 00:11:16.069 18.967 - 19.084: 78.9263% ( 7) 00:11:16.069 19.084 - 19.200: 79.0306% ( 12) 00:11:16.069 19.200 - 19.316: 79.1088% ( 9) 00:11:16.069 19.316 - 19.433: 79.1696% ( 7) 00:11:16.069 19.433 - 19.549: 79.2912% ( 14) 00:11:16.069 19.549 - 19.665: 79.3867% ( 11) 00:11:16.069 19.665 - 19.782: 79.4562% ( 8) 00:11:16.069 19.782 - 19.898: 79.5605% ( 12) 00:11:16.069 19.898 - 20.015: 79.6560% ( 11) 00:11:16.069 20.015 - 20.131: 79.7603% ( 12) 00:11:16.069 20.131 - 20.247: 79.8819% ( 14) 00:11:16.069 20.247 - 20.364: 80.0556% ( 20) 00:11:16.069 20.364 - 20.480: 80.2206% ( 19) 00:11:16.069 20.480 - 20.596: 80.4204% ( 23) 00:11:16.069 20.596 - 20.713: 80.6897% ( 31) 00:11:16.069 20.713 - 20.829: 80.9937% ( 35) 00:11:16.069 20.829 - 20.945: 81.3325% ( 39) 00:11:16.069 20.945 - 21.062: 81.8277% ( 57) 00:11:16.069 21.062 - 21.178: 82.2186% ( 45) 00:11:16.069 21.178 - 21.295: 82.6616% ( 51) 00:11:16.069 21.295 - 21.411: 83.3912% ( 84) 00:11:16.069 21.411 - 21.527: 83.9732% ( 67) 00:11:16.069 21.527 - 21.644: 84.7290% ( 87) 00:11:16.069 21.644 - 21.760: 85.2675% ( 62) 00:11:16.069 21.760 - 21.876: 85.9451% ( 78) 00:11:16.069 21.876 - 21.993: 86.6140% ( 77) 00:11:16.069 21.993 - 22.109: 87.0830% ( 54) 00:11:16.069 
22.109 - 22.225: 87.6477% ( 65) 00:11:16.069 22.225 - 22.342: 88.2123% ( 65) 00:11:16.069 22.342 - 22.458: 88.6466% ( 50) 00:11:16.069 22.458 - 22.575: 89.1244% ( 55) 00:11:16.069 22.575 - 22.691: 89.5327% ( 47) 00:11:16.069 22.691 - 22.807: 89.9149% ( 44) 00:11:16.069 22.807 - 22.924: 90.2102% ( 34) 00:11:16.069 22.924 - 23.040: 90.6445% ( 50) 00:11:16.069 23.040 - 23.156: 91.0615% ( 48) 00:11:16.069 23.156 - 23.273: 91.3829% ( 37) 00:11:16.069 23.273 - 23.389: 91.6869% ( 35) 00:11:16.069 23.389 - 23.505: 92.1126% ( 49) 00:11:16.069 23.505 - 23.622: 92.4166% ( 35) 00:11:16.069 23.622 - 23.738: 92.6946% ( 32) 00:11:16.069 23.738 - 23.855: 93.0420% ( 40) 00:11:16.069 23.855 - 23.971: 93.3895% ( 40) 00:11:16.069 23.971 - 24.087: 93.6414% ( 29) 00:11:16.069 24.087 - 24.204: 93.8846% ( 28) 00:11:16.069 24.204 - 24.320: 94.1800% ( 34) 00:11:16.069 24.320 - 24.436: 94.3885% ( 24) 00:11:16.069 24.436 - 24.553: 94.6404% ( 29) 00:11:16.069 24.553 - 24.669: 94.9183% ( 32) 00:11:16.069 24.669 - 24.785: 95.0573% ( 16) 00:11:16.069 24.785 - 24.902: 95.3266% ( 31) 00:11:16.069 24.902 - 25.018: 95.6046% ( 32) 00:11:16.069 25.018 - 25.135: 95.8391% ( 27) 00:11:16.069 25.135 - 25.251: 95.9694% ( 15) 00:11:16.069 25.251 - 25.367: 96.1605% ( 22) 00:11:16.069 25.367 - 25.484: 96.3343% ( 20) 00:11:16.069 25.484 - 25.600: 96.4906% ( 18) 00:11:16.069 25.600 - 25.716: 96.6557% ( 19) 00:11:16.069 25.716 - 25.833: 96.8728% ( 25) 00:11:16.069 25.833 - 25.949: 96.9944% ( 14) 00:11:16.069 25.949 - 26.065: 97.1682% ( 20) 00:11:16.069 26.065 - 26.182: 97.3680% ( 23) 00:11:16.069 26.182 - 26.298: 97.4983% ( 15) 00:11:16.069 26.298 - 26.415: 97.6459% ( 17) 00:11:16.069 26.415 - 26.531: 97.7849% ( 16) 00:11:16.069 26.531 - 26.647: 97.8805% ( 11) 00:11:16.069 26.647 - 26.764: 97.9500% ( 8) 00:11:16.069 26.764 - 26.880: 98.1063% ( 18) 00:11:16.069 26.880 - 26.996: 98.2019% ( 11) 00:11:16.069 27.113 - 27.229: 98.2540% ( 6) 00:11:16.069 27.229 - 27.345: 98.3061% ( 6) 00:11:16.069 27.345 - 27.462: 98.3495% ( 5) 00:11:16.069 27.462 - 27.578: 98.3843% ( 4) 00:11:16.069 27.578 - 27.695: 98.4190% ( 4) 00:11:16.069 27.695 - 27.811: 98.4712% ( 6) 00:11:16.069 27.811 - 27.927: 98.4972% ( 3) 00:11:16.069 27.927 - 28.044: 98.5146% ( 2) 00:11:16.069 28.044 - 28.160: 98.5754% ( 7) 00:11:16.069 28.160 - 28.276: 98.5928% ( 2) 00:11:16.069 28.276 - 28.393: 98.6101% ( 2) 00:11:16.069 28.393 - 28.509: 98.6536% ( 5) 00:11:16.070 28.509 - 28.625: 98.6796% ( 3) 00:11:16.070 28.625 - 28.742: 98.6883% ( 1) 00:11:16.070 28.742 - 28.858: 98.7144% ( 3) 00:11:16.070 28.858 - 28.975: 98.7404% ( 3) 00:11:16.070 28.975 - 29.091: 98.7665% ( 3) 00:11:16.070 29.091 - 29.207: 98.7839% ( 2) 00:11:16.070 29.207 - 29.324: 98.7926% ( 1) 00:11:16.070 29.324 - 29.440: 98.8013% ( 1) 00:11:16.070 29.673 - 29.789: 98.8186% ( 2) 00:11:16.070 29.789 - 30.022: 98.8360% ( 2) 00:11:16.070 30.255 - 30.487: 98.8534% ( 2) 00:11:16.070 30.487 - 30.720: 98.8707% ( 2) 00:11:16.070 30.720 - 30.953: 98.9055% ( 4) 00:11:16.070 30.953 - 31.185: 98.9315% ( 3) 00:11:16.070 31.185 - 31.418: 98.9489% ( 2) 00:11:16.070 31.418 - 31.651: 99.0010% ( 6) 00:11:16.070 31.651 - 31.884: 99.0271% ( 3) 00:11:16.070 31.884 - 32.116: 99.0792% ( 6) 00:11:16.070 32.116 - 32.349: 99.1053% ( 3) 00:11:16.070 32.349 - 32.582: 99.1140% ( 1) 00:11:16.070 32.582 - 32.815: 99.1227% ( 1) 00:11:16.070 32.815 - 33.047: 99.1400% ( 2) 00:11:16.070 33.047 - 33.280: 99.1835% ( 5) 00:11:16.070 33.280 - 33.513: 99.1921% ( 1) 00:11:16.070 33.513 - 33.745: 99.2269% ( 4) 00:11:16.070 33.745 - 33.978: 99.2616% ( 4) 
00:11:16.070 33.978 - 34.211: 99.2703% ( 1) 00:11:16.070 34.211 - 34.444: 99.2964% ( 3) 00:11:16.070 34.444 - 34.676: 99.3051% ( 1) 00:11:16.070 34.676 - 34.909: 99.3398% ( 4) 00:11:16.070 34.909 - 35.142: 99.3746% ( 4) 00:11:16.070 35.142 - 35.375: 99.4006% ( 3) 00:11:16.070 35.375 - 35.607: 99.4354% ( 4) 00:11:16.070 35.607 - 35.840: 99.4527% ( 2) 00:11:16.070 35.840 - 36.073: 99.4614% ( 1) 00:11:16.070 36.073 - 36.305: 99.4875% ( 3) 00:11:16.070 36.305 - 36.538: 99.5136% ( 3) 00:11:16.070 36.538 - 36.771: 99.5396% ( 3) 00:11:16.070 36.771 - 37.004: 99.5570% ( 2) 00:11:16.070 37.004 - 37.236: 99.5917% ( 4) 00:11:16.070 37.236 - 37.469: 99.6004% ( 1) 00:11:16.070 37.469 - 37.702: 99.6178% ( 2) 00:11:16.070 37.702 - 37.935: 99.6265% ( 1) 00:11:16.070 37.935 - 38.167: 99.6352% ( 1) 00:11:16.070 38.167 - 38.400: 99.6699% ( 4) 00:11:16.070 38.400 - 38.633: 99.6786% ( 1) 00:11:16.070 38.633 - 38.865: 99.6873% ( 1) 00:11:16.070 38.865 - 39.098: 99.6960% ( 1) 00:11:16.070 39.331 - 39.564: 99.7133% ( 2) 00:11:16.070 39.564 - 39.796: 99.7307% ( 2) 00:11:16.070 39.796 - 40.029: 99.7741% ( 5) 00:11:16.070 40.029 - 40.262: 99.7915% ( 2) 00:11:16.070 41.193 - 41.425: 99.8089% ( 2) 00:11:16.070 41.425 - 41.658: 99.8263% ( 2) 00:11:16.070 41.658 - 41.891: 99.8350% ( 1) 00:11:16.070 42.124 - 42.356: 99.8436% ( 1) 00:11:16.070 42.356 - 42.589: 99.8523% ( 1) 00:11:16.070 42.589 - 42.822: 99.8610% ( 1) 00:11:16.070 43.520 - 43.753: 99.8697% ( 1) 00:11:16.070 43.753 - 43.985: 99.8784% ( 1) 00:11:16.070 44.451 - 44.684: 99.8871% ( 1) 00:11:16.070 44.684 - 44.916: 99.9044% ( 2) 00:11:16.070 45.382 - 45.615: 99.9131% ( 1) 00:11:16.070 46.080 - 46.313: 99.9218% ( 1) 00:11:16.070 47.709 - 47.942: 99.9305% ( 1) 00:11:16.070 48.407 - 48.640: 99.9392% ( 1) 00:11:16.070 52.131 - 52.364: 99.9479% ( 1) 00:11:16.070 53.993 - 54.225: 99.9566% ( 1) 00:11:16.070 55.389 - 55.622: 99.9653% ( 1) 00:11:16.070 58.415 - 58.647: 99.9739% ( 1) 00:11:16.070 69.818 - 70.284: 99.9826% ( 1) 00:11:16.070 73.076 - 73.542: 99.9913% ( 1) 00:11:16.070 99.142 - 99.607: 100.0000% ( 1) 00:11:16.070 00:11:16.070 Complete histogram 00:11:16.070 ================== 00:11:16.070 Range in us Cumulative Count 00:11:16.070 9.542 - 9.600: 0.0434% ( 5) 00:11:16.070 9.600 - 9.658: 0.7036% ( 76) 00:11:16.070 9.658 - 9.716: 3.2662% ( 295) 00:11:16.070 9.716 - 9.775: 9.5639% ( 725) 00:11:16.070 9.775 - 9.833: 19.6491% ( 1161) 00:11:16.070 9.833 - 9.891: 31.9406% ( 1415) 00:11:16.070 9.891 - 9.949: 43.4503% ( 1325) 00:11:16.070 9.949 - 10.007: 52.2238% ( 1010) 00:11:16.070 10.007 - 10.065: 57.9917% ( 664) 00:11:16.070 10.065 - 10.124: 61.9614% ( 457) 00:11:16.070 10.124 - 10.182: 64.5761% ( 301) 00:11:16.070 10.182 - 10.240: 65.8183% ( 143) 00:11:16.070 10.240 - 10.298: 66.6174% ( 92) 00:11:16.070 10.298 - 10.356: 67.2081% ( 68) 00:11:16.070 10.356 - 10.415: 67.6164% ( 47) 00:11:16.070 10.415 - 10.473: 67.8857% ( 31) 00:11:16.070 10.473 - 10.531: 68.1202% ( 27) 00:11:16.070 10.531 - 10.589: 68.4243% ( 35) 00:11:16.070 10.589 - 10.647: 68.8151% ( 45) 00:11:16.070 10.647 - 10.705: 69.1105% ( 34) 00:11:16.070 10.705 - 10.764: 69.3537% ( 28) 00:11:16.070 10.764 - 10.822: 69.7359% ( 44) 00:11:16.070 10.822 - 10.880: 70.1876% ( 52) 00:11:16.070 10.880 - 10.938: 70.7523% ( 65) 00:11:16.070 10.938 - 10.996: 71.4385% ( 79) 00:11:16.070 10.996 - 11.055: 71.9336% ( 57) 00:11:16.070 11.055 - 11.113: 72.6981% ( 88) 00:11:16.070 11.113 - 11.171: 73.2714% ( 66) 00:11:16.070 11.171 - 11.229: 73.7318% ( 53) 00:11:16.070 11.229 - 11.287: 74.2182% ( 56) 00:11:16.070 11.287 - 
11.345: 74.5222% ( 35) 00:11:16.070 11.345 - 11.404: 74.7915% ( 31) 00:11:16.070 11.404 - 11.462: 75.0869% ( 34) 00:11:16.070 11.462 - 11.520: 75.2780% ( 22) 00:11:16.070 11.520 - 11.578: 75.4864% ( 24) 00:11:16.070 11.578 - 11.636: 75.7123% ( 26) 00:11:16.070 11.636 - 11.695: 75.9034% ( 22) 00:11:16.070 11.695 - 11.753: 76.1119% ( 24) 00:11:16.070 11.753 - 11.811: 76.2856% ( 20) 00:11:16.070 11.811 - 11.869: 76.4333% ( 17) 00:11:16.070 11.869 - 11.927: 76.5810% ( 17) 00:11:16.070 11.927 - 11.985: 76.7373% ( 18) 00:11:16.070 11.985 - 12.044: 76.8416% ( 12) 00:11:16.070 12.044 - 12.102: 77.0153% ( 20) 00:11:16.070 12.102 - 12.160: 77.3106% ( 34) 00:11:16.070 12.160 - 12.218: 77.5712% ( 30) 00:11:16.070 12.218 - 12.276: 77.8579% ( 33) 00:11:16.070 12.276 - 12.335: 78.0664% ( 24) 00:11:16.070 12.335 - 12.393: 78.3791% ( 36) 00:11:16.070 12.393 - 12.451: 78.6484% ( 31) 00:11:16.070 12.451 - 12.509: 78.7960% ( 17) 00:11:16.070 12.509 - 12.567: 78.9958% ( 23) 00:11:16.070 12.567 - 12.625: 79.1348% ( 16) 00:11:16.070 12.625 - 12.684: 79.2477% ( 13) 00:11:16.070 12.684 - 12.742: 79.4649% ( 25) 00:11:16.070 12.742 - 12.800: 79.5952% ( 15) 00:11:16.070 12.800 - 12.858: 79.6908% ( 11) 00:11:16.070 12.858 - 12.916: 79.7776% ( 10) 00:11:16.070 12.916 - 12.975: 79.8819% ( 12) 00:11:16.070 12.975 - 13.033: 80.0035% ( 14) 00:11:16.070 13.033 - 13.091: 80.0643% ( 7) 00:11:16.070 13.091 - 13.149: 80.1251% ( 7) 00:11:16.070 13.149 - 13.207: 80.2033% ( 9) 00:11:16.070 13.207 - 13.265: 80.2467% ( 5) 00:11:16.070 13.265 - 13.324: 80.3596% ( 13) 00:11:16.070 13.324 - 13.382: 80.4117% ( 6) 00:11:16.070 13.382 - 13.440: 80.4639% ( 6) 00:11:16.070 13.440 - 13.498: 80.5247% ( 7) 00:11:16.070 13.498 - 13.556: 80.5855% ( 7) 00:11:16.070 13.556 - 13.615: 80.6637% ( 9) 00:11:16.070 13.615 - 13.673: 80.7505% ( 10) 00:11:16.070 13.673 - 13.731: 80.8548% ( 12) 00:11:16.070 13.731 - 13.789: 80.9416% ( 10) 00:11:16.070 13.789 - 13.847: 81.0546% ( 13) 00:11:16.070 13.847 - 13.905: 81.1501% ( 11) 00:11:16.070 13.905 - 13.964: 81.2457% ( 11) 00:11:16.070 13.964 - 14.022: 81.3760% ( 15) 00:11:16.070 14.022 - 14.080: 81.4802% ( 12) 00:11:16.070 14.080 - 14.138: 81.6539% ( 20) 00:11:16.070 14.138 - 14.196: 81.7582% ( 12) 00:11:16.070 14.196 - 14.255: 81.9840% ( 26) 00:11:16.070 14.255 - 14.313: 82.1056% ( 14) 00:11:16.070 14.313 - 14.371: 82.2794% ( 20) 00:11:16.070 14.371 - 14.429: 82.5052% ( 26) 00:11:16.070 14.429 - 14.487: 82.7224% ( 25) 00:11:16.070 14.487 - 14.545: 82.9656% ( 28) 00:11:16.070 14.545 - 14.604: 83.1741% ( 24) 00:11:16.070 14.604 - 14.662: 83.4086% ( 27) 00:11:16.070 14.662 - 14.720: 83.6605% ( 29) 00:11:16.070 14.720 - 14.778: 83.9646% ( 35) 00:11:16.070 14.778 - 14.836: 84.2425% ( 32) 00:11:16.070 14.836 - 14.895: 84.4771% ( 27) 00:11:16.070 14.895 - 15.011: 85.0504% ( 66) 00:11:16.070 15.011 - 15.127: 85.5455% ( 57) 00:11:16.070 15.127 - 15.244: 86.0580% ( 59) 00:11:16.070 15.244 - 15.360: 86.6053% ( 63) 00:11:16.070 15.360 - 15.476: 87.2828% ( 78) 00:11:16.070 15.476 - 15.593: 87.8301% ( 63) 00:11:16.070 15.593 - 15.709: 88.4382% ( 70) 00:11:16.070 15.709 - 15.825: 88.9246% ( 56) 00:11:16.070 15.825 - 15.942: 89.3416% ( 48) 00:11:16.070 15.942 - 16.058: 89.7846% ( 51) 00:11:16.070 16.058 - 16.175: 90.3753% ( 68) 00:11:16.070 16.175 - 16.291: 90.8356% ( 53) 00:11:16.070 16.291 - 16.407: 91.4003% ( 65) 00:11:16.070 16.407 - 16.524: 91.9388% ( 62) 00:11:16.070 16.524 - 16.640: 92.4166% ( 55) 00:11:16.070 16.640 - 16.756: 92.7814% ( 42) 00:11:16.070 16.756 - 16.873: 93.1897% ( 47) 00:11:16.070 16.873 - 16.989: 
93.5111% ( 37) 00:11:16.070 16.989 - 17.105: 93.8499% ( 39) 00:11:16.070 17.105 - 17.222: 94.1105% ( 30) 00:11:16.070 17.222 - 17.338: 94.4319% ( 37) 00:11:16.070 17.338 - 17.455: 94.7012% ( 31) 00:11:16.070 17.455 - 17.571: 94.8923% ( 22) 00:11:16.071 17.571 - 17.687: 95.1008% ( 24) 00:11:16.071 17.687 - 17.804: 95.2919% ( 22) 00:11:16.071 17.804 - 17.920: 95.4482% ( 18) 00:11:16.071 17.920 - 18.036: 95.6915% ( 28) 00:11:16.071 18.036 - 18.153: 95.8391% ( 17) 00:11:16.071 18.153 - 18.269: 96.0389% ( 23) 00:11:16.071 18.269 - 18.385: 96.2387% ( 23) 00:11:16.071 18.385 - 18.502: 96.4472% ( 24) 00:11:16.071 18.502 - 18.618: 96.6209% ( 20) 00:11:16.071 18.618 - 18.735: 96.7599% ( 16) 00:11:16.071 18.735 - 18.851: 96.8728% ( 13) 00:11:16.071 18.851 - 18.967: 97.0118% ( 16) 00:11:16.071 18.967 - 19.084: 97.1421% ( 15) 00:11:16.071 19.084 - 19.200: 97.2637% ( 14) 00:11:16.071 19.200 - 19.316: 97.3680% ( 12) 00:11:16.071 19.316 - 19.433: 97.4114% ( 5) 00:11:16.071 19.433 - 19.549: 97.5156% ( 12) 00:11:16.071 19.549 - 19.665: 97.5938% ( 9) 00:11:16.071 19.665 - 19.782: 97.7154% ( 14) 00:11:16.071 19.782 - 19.898: 97.7762% ( 7) 00:11:16.071 19.898 - 20.015: 97.8544% ( 9) 00:11:16.071 20.015 - 20.131: 97.9847% ( 15) 00:11:16.071 20.131 - 20.247: 98.1150% ( 15) 00:11:16.071 20.247 - 20.364: 98.1671% ( 6) 00:11:16.071 20.364 - 20.480: 98.2453% ( 9) 00:11:16.071 20.480 - 20.596: 98.3322% ( 10) 00:11:16.071 20.596 - 20.713: 98.4277% ( 11) 00:11:16.071 20.713 - 20.829: 98.4972% ( 8) 00:11:16.071 20.829 - 20.945: 98.5493% ( 6) 00:11:16.071 20.945 - 21.062: 98.6188% ( 8) 00:11:16.071 21.062 - 21.178: 98.6623% ( 5) 00:11:16.071 21.178 - 21.295: 98.6883% ( 3) 00:11:16.071 21.295 - 21.411: 98.7404% ( 6) 00:11:16.071 21.411 - 21.527: 98.7578% ( 2) 00:11:16.071 21.527 - 21.644: 98.8360% ( 9) 00:11:16.071 21.644 - 21.760: 98.8794% ( 5) 00:11:16.071 21.760 - 21.876: 98.9315% ( 6) 00:11:16.071 21.876 - 21.993: 98.9489% ( 2) 00:11:16.071 21.993 - 22.109: 98.9663% ( 2) 00:11:16.071 22.109 - 22.225: 99.0010% ( 4) 00:11:16.071 22.225 - 22.342: 99.0097% ( 1) 00:11:16.071 22.342 - 22.458: 99.0184% ( 1) 00:11:16.071 22.575 - 22.691: 99.0532% ( 4) 00:11:16.071 22.691 - 22.807: 99.0705% ( 2) 00:11:16.071 22.924 - 23.040: 99.0966% ( 3) 00:11:16.071 23.040 - 23.156: 99.1140% ( 2) 00:11:16.071 23.156 - 23.273: 99.1313% ( 2) 00:11:16.071 23.273 - 23.389: 99.1400% ( 1) 00:11:16.071 23.389 - 23.505: 99.1748% ( 4) 00:11:16.071 23.505 - 23.622: 99.2008% ( 3) 00:11:16.071 23.622 - 23.738: 99.2182% ( 2) 00:11:16.071 23.738 - 23.855: 99.2269% ( 1) 00:11:16.071 23.855 - 23.971: 99.2356% ( 1) 00:11:16.071 23.971 - 24.087: 99.2530% ( 2) 00:11:16.071 24.087 - 24.204: 99.2790% ( 3) 00:11:16.071 24.204 - 24.320: 99.2877% ( 1) 00:11:16.071 24.436 - 24.553: 99.2964% ( 1) 00:11:16.071 24.553 - 24.669: 99.3051% ( 1) 00:11:16.071 24.669 - 24.785: 99.3138% ( 1) 00:11:16.071 24.785 - 24.902: 99.3224% ( 1) 00:11:16.071 25.135 - 25.251: 99.3572% ( 4) 00:11:16.071 25.367 - 25.484: 99.3659% ( 1) 00:11:16.071 25.484 - 25.600: 99.3746% ( 1) 00:11:16.071 25.600 - 25.716: 99.4006% ( 3) 00:11:16.071 26.065 - 26.182: 99.4267% ( 3) 00:11:16.071 26.182 - 26.298: 99.4354% ( 1) 00:11:16.071 26.298 - 26.415: 99.4441% ( 1) 00:11:16.071 26.531 - 26.647: 99.4701% ( 3) 00:11:16.071 26.647 - 26.764: 99.4788% ( 1) 00:11:16.071 26.880 - 26.996: 99.4875% ( 1) 00:11:16.071 27.113 - 27.229: 99.4962% ( 1) 00:11:16.071 27.695 - 27.811: 99.5222% ( 3) 00:11:16.071 27.811 - 27.927: 99.5309% ( 1) 00:11:16.071 28.160 - 28.276: 99.5483% ( 2) 00:11:16.071 28.276 - 28.393: 
99.5570% ( 1) 00:11:16.071 28.393 - 28.509: 99.5657% ( 1) 00:11:16.071 28.625 - 28.742: 99.5744% ( 1) 00:11:16.071 28.742 - 28.858: 99.5830% ( 1) 00:11:16.071 28.975 - 29.091: 99.6004% ( 2) 00:11:16.071 29.091 - 29.207: 99.6091% ( 1) 00:11:16.071 29.324 - 29.440: 99.6178% ( 1) 00:11:16.071 29.556 - 29.673: 99.6265% ( 1) 00:11:16.071 29.789 - 30.022: 99.6438% ( 2) 00:11:16.071 30.022 - 30.255: 99.6873% ( 5) 00:11:16.071 30.487 - 30.720: 99.6960% ( 1) 00:11:16.071 30.720 - 30.953: 99.7133% ( 2) 00:11:16.071 30.953 - 31.185: 99.7394% ( 3) 00:11:16.071 31.651 - 31.884: 99.7481% ( 1) 00:11:16.071 31.884 - 32.116: 99.7741% ( 3) 00:11:16.071 32.349 - 32.582: 99.7828% ( 1) 00:11:16.071 33.047 - 33.280: 99.8002% ( 2) 00:11:16.071 33.280 - 33.513: 99.8176% ( 2) 00:11:16.071 34.211 - 34.444: 99.8263% ( 1) 00:11:16.071 34.444 - 34.676: 99.8523% ( 3) 00:11:16.071 34.676 - 34.909: 99.8610% ( 1) 00:11:16.071 35.607 - 35.840: 99.8871% ( 3) 00:11:16.071 36.305 - 36.538: 99.9131% ( 3) 00:11:16.071 37.004 - 37.236: 99.9218% ( 1) 00:11:16.071 37.935 - 38.167: 99.9392% ( 2) 00:11:16.071 38.865 - 39.098: 99.9479% ( 1) 00:11:16.071 39.331 - 39.564: 99.9566% ( 1) 00:11:16.071 41.193 - 41.425: 99.9653% ( 1) 00:11:16.071 42.124 - 42.356: 99.9739% ( 1) 00:11:16.071 45.615 - 45.847: 99.9826% ( 1) 00:11:16.071 54.225 - 54.458: 99.9913% ( 1) 00:11:16.071 78.196 - 78.662: 100.0000% ( 1) 00:11:16.071 00:11:16.071 ************************************ 00:11:16.071 END TEST nvme_overhead 00:11:16.071 ************************************ 00:11:16.071 00:11:16.071 real 0m1.345s 00:11:16.071 user 0m1.124s 00:11:16.071 sys 0m0.158s 00:11:16.071 11:19:38 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.071 11:19:38 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:11:16.071 11:19:38 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:16.071 11:19:38 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:16.071 11:19:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.071 11:19:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:16.071 ************************************ 00:11:16.071 START TEST nvme_arbitration 00:11:16.071 ************************************ 00:11:16.071 11:19:38 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:20.257 Initializing NVMe Controllers 00:11:20.257 Attached to 0000:00:10.0 00:11:20.257 Attached to 0000:00:11.0 00:11:20.257 Attached to 0000:00:13.0 00:11:20.257 Attached to 0000:00:12.0 00:11:20.257 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:20.257 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:20.257 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:20.257 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:20.257 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:20.257 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:20.257 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:20.257 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:20.257 Initialization complete. Launching workers. 
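For the nvme_overhead run that ends above: it was launched as overhead -o 4096 -t 1 -H -i 0, which reads naturally as 4096-byte IOs for 1 second with histograms enabled (-H) in shared-memory group 0 (-i); treat those flag readings as inferences from the output rather than the tool's documented help text. In each histogram row "a - b: p% ( n )", n is the number of IOs whose software overhead landed in the a..b microsecond bucket and p is the running cumulative percentage, which is why the last row of each table reads 100.0000%. A sketch for repeating only this measurement with the same parameters:

    # re-run the per-IO overhead measurement as configured in this log
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0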
00:11:20.257 Starting thread on core 1 with urgent priority queue 00:11:20.257 Starting thread on core 2 with urgent priority queue 00:11:20.257 Starting thread on core 3 with urgent priority queue 00:11:20.257 Starting thread on core 0 with urgent priority queue 00:11:20.257 QEMU NVMe Ctrl (12340 ) core 0: 682.67 IO/s 146.48 secs/100000 ios 00:11:20.257 QEMU NVMe Ctrl (12342 ) core 0: 682.67 IO/s 146.48 secs/100000 ios 00:11:20.257 QEMU NVMe Ctrl (12341 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:11:20.257 QEMU NVMe Ctrl (12342 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:11:20.257 QEMU NVMe Ctrl (12343 ) core 2: 746.67 IO/s 133.93 secs/100000 ios 00:11:20.257 QEMU NVMe Ctrl (12342 ) core 3: 554.67 IO/s 180.29 secs/100000 ios 00:11:20.257 ======================================================== 00:11:20.257 00:11:20.257 ************************************ 00:11:20.257 END TEST nvme_arbitration 00:11:20.257 ************************************ 00:11:20.257 00:11:20.257 real 0m3.497s 00:11:20.257 user 0m9.407s 00:11:20.257 sys 0m0.185s 00:11:20.257 11:19:41 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.257 11:19:41 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:11:20.257 11:19:41 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:20.257 11:19:41 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:20.257 11:19:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.257 11:19:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:20.257 ************************************ 00:11:20.257 START TEST nvme_single_aen 00:11:20.257 ************************************ 00:11:20.257 11:19:41 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:20.257 Asynchronous Event Request test 00:11:20.257 Attached to 0000:00:10.0 00:11:20.257 Attached to 0000:00:11.0 00:11:20.257 Attached to 0000:00:13.0 00:11:20.257 Attached to 0000:00:12.0 00:11:20.257 Reset controller to setup AER completions for this process 00:11:20.257 Registering asynchronous event callbacks... 
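[Editor's note] On the arbitration summary above: the two rate columns are the same measurement seen two ways. At 682.67 IO/s, 100000 I/Os take 100000 / 682.67 = 146.48 s, which is exactly the secs/100000 ios figure on that row; likewise 100000 / 554.67 = 180.29 s for the 554.67 IO/s controllers and 100000 / 746.67 = 133.93 s for core 2.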
00:11:20.257 Getting orig temperature thresholds of all controllers 00:11:20.257 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:20.257 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:20.257 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:20.257 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:20.257 Setting all controllers temperature threshold low to trigger AER 00:11:20.257 Waiting for all controllers temperature threshold to be set lower 00:11:20.257 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:20.257 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:20.257 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:20.257 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:20.257 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:20.257 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:20.257 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:20.257 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:20.257 Waiting for all controllers to trigger AER and reset threshold 00:11:20.257 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:20.257 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:20.257 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:20.257 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:20.257 Cleaning up... 00:11:20.257 ************************************ 00:11:20.257 END TEST nvme_single_aen 00:11:20.257 ************************************ 00:11:20.257 00:11:20.257 real 0m0.321s 00:11:20.257 user 0m0.121s 00:11:20.257 sys 0m0.143s 00:11:20.257 11:19:41 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.257 11:19:41 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:11:20.257 11:19:41 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:20.257 11:19:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:20.257 11:19:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.257 11:19:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:20.257 ************************************ 00:11:20.257 START TEST nvme_doorbell_aers 00:11:20.257 ************************************ 00:11:20.257 11:19:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:11:20.257 11:19:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:11:20.257 11:19:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:20.257 11:19:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:20.257 11:19:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:20.257 11:19:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:20.258 11:19:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:11:20.258 11:19:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:20.258 11:19:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:20.258 11:19:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
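[Editor's note] The device list for nvme_doorbell_aers is built from the gen_nvme.sh | jq pipeline traced just above. A self-contained sketch of that enumeration helper, its shape inferred from the xtrace (the real function lives in autotest_common.sh):

    # sketch: enumerate NVMe controller PCI addresses (BDFs) from gen_nvme.sh
    # JSON, which carries entries like {"params":{"traddr":"0000:00:10.0"}}
    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && return 1   # fail when no controllers are found
        printf '%s\n' "${bdfs[@]}"
    }

The trace above shows exactly this shape: a count check (( 4 == 0 )) followed by printf '%s\n' over the four BDFs 0000:00:10.0 through 0000:00:13.0.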
00:11:20.258 11:19:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:20.258 11:19:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:20.258 11:19:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:20.258 11:19:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:20.258 [2024-12-10 11:19:42.387368] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:11:30.228 Executing: test_write_invalid_db 00:11:30.228 Waiting for AER completion... 00:11:30.228 Failure: test_write_invalid_db 00:11:30.228 00:11:30.228 Executing: test_invalid_db_write_overflow_sq 00:11:30.228 Waiting for AER completion... 00:11:30.228 Failure: test_invalid_db_write_overflow_sq 00:11:30.228 00:11:30.228 Executing: test_invalid_db_write_overflow_cq 00:11:30.228 Waiting for AER completion... 00:11:30.228 Failure: test_invalid_db_write_overflow_cq 00:11:30.228 00:11:30.228 11:19:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:30.228 11:19:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:30.485 [2024-12-10 11:19:52.407055] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:11:40.485 Executing: test_write_invalid_db 00:11:40.485 Waiting for AER completion... 00:11:40.485 Failure: test_write_invalid_db 00:11:40.485 00:11:40.485 Executing: test_invalid_db_write_overflow_sq 00:11:40.485 Waiting for AER completion... 00:11:40.485 Failure: test_invalid_db_write_overflow_sq 00:11:40.485 00:11:40.485 Executing: test_invalid_db_write_overflow_cq 00:11:40.485 Waiting for AER completion... 00:11:40.485 Failure: test_invalid_db_write_overflow_cq 00:11:40.485 00:11:40.485 11:20:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:40.486 11:20:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:40.486 [2024-12-10 11:20:02.447363] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:11:50.477 Executing: test_write_invalid_db 00:11:50.477 Waiting for AER completion... 00:11:50.477 Failure: test_write_invalid_db 00:11:50.477 00:11:50.477 Executing: test_invalid_db_write_overflow_sq 00:11:50.477 Waiting for AER completion... 00:11:50.477 Failure: test_invalid_db_write_overflow_sq 00:11:50.477 00:11:50.477 Executing: test_invalid_db_write_overflow_cq 00:11:50.477 Waiting for AER completion... 
00:11:50.477 Failure: test_invalid_db_write_overflow_cq 00:11:50.477 00:11:50.477 11:20:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:50.477 11:20:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:50.477 [2024-12-10 11:20:12.516573] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:12:00.445 Executing: test_write_invalid_db 00:12:00.445 Waiting for AER completion... 00:12:00.445 Failure: test_write_invalid_db 00:12:00.445 00:12:00.445 Executing: test_invalid_db_write_overflow_sq 00:12:00.445 Waiting for AER completion... 00:12:00.445 Failure: test_invalid_db_write_overflow_sq 00:12:00.445 00:12:00.445 Executing: test_invalid_db_write_overflow_cq 00:12:00.445 Waiting for AER completion... 00:12:00.445 Failure: test_invalid_db_write_overflow_cq 00:12:00.445 00:12:00.445 00:12:00.445 real 0m40.256s 00:12:00.445 user 0m34.148s 00:12:00.445 sys 0m5.703s 00:12:00.445 11:20:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.445 11:20:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:00.445 ************************************ 00:12:00.445 END TEST nvme_doorbell_aers 00:12:00.445 ************************************ 00:12:00.445 11:20:22 nvme -- nvme/nvme.sh@97 -- # uname 00:12:00.445 11:20:22 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:00.445 11:20:22 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:00.445 11:20:22 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:00.445 11:20:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.445 11:20:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:00.445 ************************************ 00:12:00.445 START TEST nvme_multi_aen 00:12:00.445 ************************************ 00:12:00.445 11:20:22 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:00.704 [2024-12-10 11:20:22.626962] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:12:00.704 [2024-12-10 11:20:22.627087] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:12:00.704 [2024-12-10 11:20:22.627114] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:12:00.704 [2024-12-10 11:20:22.629028] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:12:00.704 [2024-12-10 11:20:22.629092] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:12:00.704 [2024-12-10 11:20:22.629115] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:12:00.704 [2024-12-10 11:20:22.630678] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. 
Dropping the request. 00:12:00.704 [2024-12-10 11:20:22.630737] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:12:00.704 [2024-12-10 11:20:22.630762] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:12:00.704 [2024-12-10 11:20:22.632326] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:12:00.704 [2024-12-10 11:20:22.632385] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:12:00.704 [2024-12-10 11:20:22.632408] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:12:00.704 Child process pid: 65476 00:12:00.962 [Child] Asynchronous Event Request test 00:12:00.962 [Child] Attached to 0000:00:10.0 00:12:00.962 [Child] Attached to 0000:00:11.0 00:12:00.962 [Child] Attached to 0000:00:13.0 00:12:00.962 [Child] Attached to 0000:00:12.0 00:12:00.963 [Child] Registering asynchronous event callbacks... 00:12:00.963 [Child] Getting orig temperature thresholds of all controllers 00:12:00.963 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:00.963 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:00.963 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:00.963 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:00.963 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:00.963 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:00.963 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:00.963 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:00.963 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:00.963 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:00.963 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:00.963 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:00.963 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:00.963 [Child] Cleaning up... 00:12:00.963 Asynchronous Event Request test 00:12:00.963 Attached to 0000:00:10.0 00:12:00.963 Attached to 0000:00:11.0 00:12:00.963 Attached to 0000:00:13.0 00:12:00.963 Attached to 0000:00:12.0 00:12:00.963 Reset controller to setup AER completions for this process 00:12:00.963 Registering asynchronous event callbacks... 
00:12:00.963 Getting orig temperature thresholds of all controllers 00:12:00.963 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:00.963 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:00.963 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:00.963 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:00.963 Setting all controllers temperature threshold low to trigger AER 00:12:00.963 Waiting for all controllers temperature threshold to be set lower 00:12:00.963 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:00.963 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:00.963 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:00.963 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:00.963 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:00.963 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:00.963 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:00.963 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:00.963 Waiting for all controllers to trigger AER and reset threshold 00:12:00.963 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:00.963 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:00.963 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:00.963 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:00.963 Cleaning up... 00:12:00.963 00:12:00.963 real 0m0.774s 00:12:00.963 user 0m0.326s 00:12:00.963 sys 0m0.331s 00:12:00.963 11:20:23 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.963 11:20:23 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:00.963 ************************************ 00:12:00.963 END TEST nvme_multi_aen 00:12:00.963 ************************************ 00:12:00.963 11:20:23 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:00.963 11:20:23 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:00.963 11:20:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.963 11:20:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:01.221 ************************************ 00:12:01.221 START TEST nvme_startup 00:12:01.221 ************************************ 00:12:01.221 11:20:23 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:01.480 Initializing NVMe Controllers 00:12:01.480 Attached to 0000:00:10.0 00:12:01.480 Attached to 0000:00:11.0 00:12:01.480 Attached to 0000:00:13.0 00:12:01.480 Attached to 0000:00:12.0 00:12:01.480 Initialization complete. 00:12:01.480 Time used:218979.219 (us). 
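[Editor's note] Both AER tests above (nvme_single_aen and the parent/child pair in nvme_multi_aen) exercise the same mechanism: read the Temperature Threshold feature, drop it below the current composite temperature so the controller raises an asynchronous event, then restore it. For illustration only, the equivalent steps with nvme-cli against a kernel-attached device would look roughly like this; the tests themselves drive SPDK's aer example binary, not nvme-cli:

    # Feature ID 0x04 = Temperature Threshold; values are in Kelvin
    nvme get-feature /dev/nvme0 -f 0x04            # log above reports 343 K (0x157)
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x140   # 320 K, below the 323 K reading, so the AER fires
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x157   # restore the original threshold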
00:12:01.480 ************************************ 00:12:01.480 END TEST nvme_startup 00:12:01.480 ************************************ 00:12:01.480 00:12:01.480 real 0m0.336s 00:12:01.480 user 0m0.135s 00:12:01.480 sys 0m0.158s 00:12:01.480 11:20:23 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.480 11:20:23 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:01.480 11:20:23 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:01.480 11:20:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.480 11:20:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.480 11:20:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:01.480 ************************************ 00:12:01.480 START TEST nvme_multi_secondary 00:12:01.480 ************************************ 00:12:01.480 11:20:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:12:01.480 11:20:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65533 00:12:01.480 11:20:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:01.480 11:20:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65534 00:12:01.480 11:20:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:01.480 11:20:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:05.659 Initializing NVMe Controllers 00:12:05.659 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:05.659 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:05.659 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:05.659 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:05.659 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:05.659 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:05.659 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:05.659 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:05.659 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:05.659 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:05.659 Initialization complete. Launching workers. 
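[Editor's note] In each Latency(us) table that follows, the MiB/s column is derived directly from IOPS at the fixed 4096-byte I/O size these runs use (-o 4096): for example, 4912.71 IO/s x 4096 B / 2^20 = 19.19 MiB/s on the first core-1 row.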
00:12:05.659 ======================================================== 00:12:05.659 Latency(us) 00:12:05.659 Device Information : IOPS MiB/s Average min max 00:12:05.659 PCIE (0000:00:10.0) NSID 1 from core 1: 4912.71 19.19 3254.78 1332.21 8703.32 00:12:05.659 PCIE (0000:00:11.0) NSID 1 from core 1: 4912.71 19.19 3256.21 1425.28 9013.27 00:12:05.659 PCIE (0000:00:13.0) NSID 1 from core 1: 4912.71 19.19 3256.19 1222.61 8647.22 00:12:05.659 PCIE (0000:00:12.0) NSID 1 from core 1: 4912.71 19.19 3256.20 1347.87 9171.93 00:12:05.659 PCIE (0000:00:12.0) NSID 2 from core 1: 4912.71 19.19 3256.39 1419.99 9468.47 00:12:05.659 PCIE (0000:00:12.0) NSID 3 from core 1: 4912.71 19.19 3256.61 1449.43 9688.02 00:12:05.659 ======================================================== 00:12:05.659 Total : 29476.26 115.14 3256.06 1222.61 9688.02 00:12:05.659 00:12:05.659 Initializing NVMe Controllers 00:12:05.659 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:05.659 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:05.659 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:05.660 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:05.660 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:05.660 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:05.660 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:05.660 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:05.660 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:05.660 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:05.660 Initialization complete. Launching workers. 00:12:05.660 ======================================================== 00:12:05.660 Latency(us) 00:12:05.660 Device Information : IOPS MiB/s Average min max 00:12:05.660 PCIE (0000:00:10.0) NSID 1 from core 2: 2254.62 8.81 7093.18 1402.05 19641.09 00:12:05.660 PCIE (0000:00:11.0) NSID 1 from core 2: 2254.62 8.81 7095.02 1474.84 19552.03 00:12:05.660 PCIE (0000:00:13.0) NSID 1 from core 2: 2254.62 8.81 7095.19 1392.03 19517.41 00:12:05.660 PCIE (0000:00:12.0) NSID 1 from core 2: 2254.62 8.81 7095.07 1397.09 19168.18 00:12:05.660 PCIE (0000:00:12.0) NSID 2 from core 2: 2254.62 8.81 7094.97 1478.58 19260.88 00:12:05.660 PCIE (0000:00:12.0) NSID 3 from core 2: 2254.62 8.81 7096.67 1404.51 19507.69 00:12:05.660 ======================================================== 00:12:05.660 Total : 13527.73 52.84 7095.02 1392.03 19641.09 00:12:05.660 00:12:05.660 11:20:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65533 00:12:07.033 Initializing NVMe Controllers 00:12:07.033 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:07.033 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:07.033 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:07.033 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:07.033 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:07.033 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:07.033 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:07.033 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:07.033 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:07.033 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:07.033 Initialization complete. Launching workers. 
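[Editor's note] The averages also follow Little's law at the fixed queue depth (-q 16): average latency is roughly qd / IOPS. For the core-0 table just below, 16 / 7199.29 IO/s = 2222 us against the reported 2220.65 us; for the slower core-2 run above, 16 / 2254.62 = 7096 us against the reported 7093-7095 us.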
00:12:07.033 ======================================================== 00:12:07.033 Latency(us) 00:12:07.033 Device Information : IOPS MiB/s Average min max 00:12:07.033 PCIE (0000:00:10.0) NSID 1 from core 0: 7199.29 28.12 2220.65 922.56 8358.33 00:12:07.033 PCIE (0000:00:11.0) NSID 1 from core 0: 7199.29 28.12 2221.90 947.33 8799.65 00:12:07.033 PCIE (0000:00:13.0) NSID 1 from core 0: 7199.29 28.12 2221.89 946.09 8310.08 00:12:07.034 PCIE (0000:00:12.0) NSID 1 from core 0: 7199.29 28.12 2221.87 926.44 7754.70 00:12:07.034 PCIE (0000:00:12.0) NSID 2 from core 0: 7199.29 28.12 2221.90 951.69 7426.95 00:12:07.034 PCIE (0000:00:12.0) NSID 3 from core 0: 7199.29 28.12 2221.89 943.52 7569.06 00:12:07.034 ======================================================== 00:12:07.034 Total : 43195.71 168.73 2221.69 922.56 8799.65 00:12:07.034 00:12:07.034 11:20:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65534 00:12:07.034 11:20:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65605 00:12:07.034 11:20:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:07.034 11:20:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:07.034 11:20:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65606 00:12:07.034 11:20:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:10.316 Initializing NVMe Controllers 00:12:10.316 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:10.316 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:10.316 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:10.316 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:10.316 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:10.316 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:10.316 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:10.316 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:10.316 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:10.316 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:10.316 Initialization complete. Launching workers. 
00:12:10.316 ======================================================== 00:12:10.316 Latency(us) 00:12:10.316 Device Information : IOPS MiB/s Average min max 00:12:10.317 PCIE (0000:00:10.0) NSID 1 from core 0: 5047.42 19.72 3167.77 936.61 7570.73 00:12:10.317 PCIE (0000:00:11.0) NSID 1 from core 0: 5047.42 19.72 3168.95 954.33 7257.15 00:12:10.317 PCIE (0000:00:13.0) NSID 1 from core 0: 5047.42 19.72 3168.81 984.58 7708.82 00:12:10.317 PCIE (0000:00:12.0) NSID 1 from core 0: 5047.42 19.72 3168.50 973.07 7775.08 00:12:10.317 PCIE (0000:00:12.0) NSID 2 from core 0: 5047.42 19.72 3168.33 970.01 7759.33 00:12:10.317 PCIE (0000:00:12.0) NSID 3 from core 0: 5047.42 19.72 3168.09 965.08 7752.14 00:12:10.317 ======================================================== 00:12:10.317 Total : 30284.51 118.30 3168.41 936.61 7775.08 00:12:10.317 00:12:10.317 Initializing NVMe Controllers 00:12:10.317 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:10.317 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:10.317 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:10.317 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:10.317 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:10.317 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:10.317 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:10.317 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:10.317 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:10.317 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:10.317 Initialization complete. Launching workers. 00:12:10.317 ======================================================== 00:12:10.317 Latency(us) 00:12:10.317 Device Information : IOPS MiB/s Average min max 00:12:10.317 PCIE (0000:00:10.0) NSID 1 from core 1: 4701.35 18.36 3400.83 1095.62 13644.45 00:12:10.317 PCIE (0000:00:11.0) NSID 1 from core 1: 4701.35 18.36 3402.13 1132.35 13707.69 00:12:10.317 PCIE (0000:00:13.0) NSID 1 from core 1: 4701.35 18.36 3401.82 1134.33 12715.87 00:12:10.317 PCIE (0000:00:12.0) NSID 1 from core 1: 4701.35 18.36 3401.58 910.25 12939.08 00:12:10.317 PCIE (0000:00:12.0) NSID 2 from core 1: 4701.35 18.36 3401.35 873.50 12959.04 00:12:10.317 PCIE (0000:00:12.0) NSID 3 from core 1: 4701.35 18.36 3401.11 821.44 13267.34 00:12:10.317 ======================================================== 00:12:10.317 Total : 28208.09 110.19 3401.47 821.44 13707.69 00:12:10.317 00:12:12.219 Initializing NVMe Controllers 00:12:12.219 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:12.219 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:12.219 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:12.219 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:12.219 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:12.219 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:12.219 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:12.219 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:12.219 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:12.219 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:12.219 Initialization complete. Launching workers. 
00:12:12.219 ======================================================== 00:12:12.219 Latency(us) 00:12:12.219 Device Information : IOPS MiB/s Average min max 00:12:12.219 PCIE (0000:00:10.0) NSID 1 from core 2: 3221.96 12.59 4963.15 1013.59 19245.57 00:12:12.219 PCIE (0000:00:11.0) NSID 1 from core 2: 3221.96 12.59 4964.87 1041.77 18901.52 00:12:12.219 PCIE (0000:00:13.0) NSID 1 from core 2: 3221.96 12.59 4964.82 1047.97 18761.12 00:12:12.219 PCIE (0000:00:12.0) NSID 1 from core 2: 3221.96 12.59 4964.50 1050.87 16849.99 00:12:12.219 PCIE (0000:00:12.0) NSID 2 from core 2: 3225.15 12.60 4956.28 1029.05 16153.66 00:12:12.219 PCIE (0000:00:12.0) NSID 3 from core 2: 3225.15 12.60 4955.97 883.60 19650.32 00:12:12.219 ======================================================== 00:12:12.219 Total : 19338.14 75.54 4961.59 883.60 19650.32 00:12:12.219 00:12:12.219 ************************************ 00:12:12.219 END TEST nvme_multi_secondary 00:12:12.219 ************************************ 00:12:12.219 11:20:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65605 00:12:12.219 11:20:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65606 00:12:12.219 00:12:12.219 real 0m10.752s 00:12:12.219 user 0m18.653s 00:12:12.219 sys 0m1.057s 00:12:12.219 11:20:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.219 11:20:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:12:12.219 11:20:34 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:12.219 11:20:34 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:12:12.219 11:20:34 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64536 ]] 00:12:12.219 11:20:34 nvme -- common/autotest_common.sh@1094 -- # kill 64536 00:12:12.219 11:20:34 nvme -- common/autotest_common.sh@1095 -- # wait 64536 00:12:12.219 [2024-12-10 11:20:34.307733] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.307861] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.307925] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.307962] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.311906] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.312015] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.312058] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.312095] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.315608] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 
00:12:12.219 [2024-12-10 11:20:34.315715] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.315750] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.315789] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.318355] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.318666] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.318718] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.219 [2024-12-10 11:20:34.318753] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65471) is not found. Dropping the request. 00:12:12.478 11:20:34 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:12:12.478 11:20:34 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:12:12.478 11:20:34 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:12.478 11:20:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:12.478 11:20:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.478 11:20:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:12.478 ************************************ 00:12:12.478 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:12.478 ************************************ 00:12:12.478 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:12.478 * Looking for test storage... 
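[Editor's note] The entries that follow trace scripts/common.sh comparing lcov's reported version against 1.15 (the traced call is lt 1.15 2, delegating to cmp_versions). A rough reconstruction of that comparison, pieced together from the xtrace below (field splitting on IFS=.-: and a component-by-component loop); treat it as a sketch of the logic, not the verbatim script, and note it assumes purely numeric version components:

    # sketch: return success iff version $1 is less than version $2
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]   # all components equal
    }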
00:12:12.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:12.478 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:12.478 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:12:12.478 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:12.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.737 --rc genhtml_branch_coverage=1 00:12:12.737 --rc genhtml_function_coverage=1 00:12:12.737 --rc genhtml_legend=1 00:12:12.737 --rc geninfo_all_blocks=1 00:12:12.737 --rc geninfo_unexecuted_blocks=1 00:12:12.737 00:12:12.737 ' 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:12.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.737 --rc genhtml_branch_coverage=1 00:12:12.737 --rc genhtml_function_coverage=1 00:12:12.737 --rc genhtml_legend=1 00:12:12.737 --rc geninfo_all_blocks=1 00:12:12.737 --rc geninfo_unexecuted_blocks=1 00:12:12.737 00:12:12.737 ' 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:12.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.737 --rc genhtml_branch_coverage=1 00:12:12.737 --rc genhtml_function_coverage=1 00:12:12.737 --rc genhtml_legend=1 00:12:12.737 --rc geninfo_all_blocks=1 00:12:12.737 --rc geninfo_unexecuted_blocks=1 00:12:12.737 00:12:12.737 ' 00:12:12.737 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:12.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.737 --rc genhtml_branch_coverage=1 00:12:12.737 --rc genhtml_function_coverage=1 00:12:12.737 --rc genhtml_legend=1 00:12:12.737 --rc geninfo_all_blocks=1 00:12:12.737 --rc geninfo_unexecuted_blocks=1 00:12:12.737 00:12:12.737 ' 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:12.738 
11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:12.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65765 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65765 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65765 ']' 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
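[Editor's note] Distilled from the xtrace that follows, the stuck-admin-command scenario is three RPC calls against the freshly started spdk_tgt, with arguments copied from the log (rpc.py talks to /var/tmp/spdk.sock by default; the test also fires the Get Features itself in the background via bdev_nvme_send_cmd so there is a command to get stuck):

    # attach the controller, arm a one-shot 15 s error injection on admin
    # opcode 10 (0x0a, Get Features), then reset; the reset must finish
    # within the 5 s test_timeout even with the admin command held back
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    scripts/rpc.py bdev_nvme_reset_controller nvme0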
00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.738 11:20:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:12.996 [2024-12-10 11:20:34.926732] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:12:12.996 [2024-12-10 11:20:34.927141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65765 ] 00:12:12.996 [2024-12-10 11:20:35.136152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.255 [2024-12-10 11:20:35.269715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.255 [2024-12-10 11:20:35.269809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.255 [2024-12-10 11:20:35.269878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.255 [2024-12-10 11:20:35.269883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:14.189 nvme0n1 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_iWQtO.txt 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:14.189 true 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733829636 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65794 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:14.189 11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:14.189 
11:20:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:16.717 [2024-12-10 11:20:38.308077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:16.717 [2024-12-10 11:20:38.308434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:16.717 [2024-12-10 11:20:38.308471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:16.717 [2024-12-10 11:20:38.308491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.717 [2024-12-10 11:20:38.310475] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:16.717 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65794 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65794 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65794 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_iWQtO.txt 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:16.717 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_iWQtO.txt 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65765 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65765 ']' 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65765 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65765 00:12:16.718 killing process with pid 65765 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65765' 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65765 00:12:16.718 11:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65765 00:12:18.618 11:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:18.618 11:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:18.618 00:12:18.618 real 0m6.050s 00:12:18.618 user 0m21.273s 00:12:18.618 sys 0m0.640s 00:12:18.618 11:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:18.618 ************************************ 00:12:18.618 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:18.618 ************************************ 00:12:18.618 11:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:18.618 11:20:40 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:18.618 11:20:40 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:18.618 11:20:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:18.619 11:20:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.619 11:20:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.619 ************************************ 00:12:18.619 START TEST nvme_fio 00:12:18.619 ************************************ 00:12:18.619 11:20:40 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:12:18.619 11:20:40 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:18.619 11:20:40 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:18.619 11:20:40 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:18.619 11:20:40 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:18.619 11:20:40 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:12:18.619 11:20:40 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:18.619 11:20:40 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:18.619 11:20:40 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:18.619 11:20:40 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:18.619 11:20:40 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:18.619 11:20:40 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:18.619 11:20:40 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:18.619 11:20:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:18.619 11:20:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:18.619 11:20:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:18.877 11:20:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:18.877 11:20:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:19.135 11:20:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:19.135 11:20:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:19.135 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:19.135 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:19.135 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:19.135 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:19.135 11:20:41 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:19.135 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:19.135 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:19.135 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:19.135 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:19.135 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:19.135 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:19.394 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:19.394 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:19.394 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:19.394 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:19.394 11:20:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:19.394 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:19.394 fio-3.35 00:12:19.394 Starting 1 thread 00:12:22.676 00:12:22.676 test: (groupid=0, jobs=1): err= 0: pid=65941: Tue Dec 10 11:20:44 2024 00:12:22.676 read: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2001msec) 00:12:22.676 slat (usec): min=4, max=556, avg= 7.65, stdev= 5.06 00:12:22.676 clat (usec): min=326, max=9351, avg=4653.78, stdev=1162.61 00:12:22.676 lat (usec): min=332, max=9360, avg=4661.44, stdev=1164.79 00:12:22.676 clat percentiles (usec): 00:12:22.676 | 1.00th=[ 2802], 5.00th=[ 3261], 10.00th=[ 3490], 20.00th=[ 3785], 00:12:22.676 | 30.00th=[ 3949], 40.00th=[ 4178], 50.00th=[ 4490], 60.00th=[ 4621], 00:12:22.676 | 70.00th=[ 4817], 80.00th=[ 5407], 90.00th=[ 6325], 95.00th=[ 7373], 00:12:22.676 | 99.00th=[ 7963], 99.50th=[ 8225], 99.90th=[ 8848], 99.95th=[ 8979], 00:12:22.676 | 99.99th=[ 9241] 00:12:22.676 bw ( KiB/s): min=52568, max=56680, per=99.89%, avg=54818.67, stdev=2083.46, samples=3 00:12:22.676 iops : min=13142, max=14170, avg=13704.67, stdev=520.87, samples=3 00:12:22.676 write: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(107MiB/2001msec); 0 zone resets 00:12:22.676 slat (usec): min=4, max=115, avg= 7.68, stdev= 3.78 00:12:22.676 clat (usec): min=337, max=9110, avg=4646.38, stdev=1154.83 00:12:22.676 lat (usec): min=343, max=9119, avg=4654.06, stdev=1156.91 00:12:22.676 clat percentiles (usec): 00:12:22.676 | 1.00th=[ 2802], 5.00th=[ 3261], 10.00th=[ 3490], 20.00th=[ 3752], 00:12:22.676 | 30.00th=[ 3949], 40.00th=[ 4178], 50.00th=[ 4490], 60.00th=[ 4621], 00:12:22.676 | 70.00th=[ 4817], 80.00th=[ 5407], 90.00th=[ 6259], 95.00th=[ 7373], 00:12:22.676 | 99.00th=[ 7963], 99.50th=[ 8225], 99.90th=[ 8848], 99.95th=[ 8979], 00:12:22.676 | 99.99th=[ 8979] 00:12:22.676 bw ( KiB/s): min=52432, max=56992, per=100.00%, avg=54834.67, stdev=2289.88, samples=3 00:12:22.676 iops : min=13108, max=14248, avg=13708.67, stdev=572.47, samples=3 00:12:22.676 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:12:22.676 lat (msec) : 2=0.07%, 4=33.13%, 10=66.76% 00:12:22.676 cpu : usr=98.75%, sys=0.05%, ctx=4, majf=0, minf=609 00:12:22.676 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:22.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:22.676 issued rwts: total=27452,27403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:22.676 00:12:22.676 Run status group 0 (all jobs): 00:12:22.676 READ: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (112MB), run=2001-2001msec 00:12:22.676 WRITE: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=107MiB (112MB), run=2001-2001msec 00:12:22.935 ----------------------------------------------------- 00:12:22.935 Suppressions used: 00:12:22.935 count bytes template 00:12:22.935 1 32 /usr/src/fio/parse.c 00:12:22.935 1 8 libtcmalloc_minimal.so 00:12:22.935 ----------------------------------------------------- 00:12:22.935 00:12:22.935 11:20:44 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:22.935 11:20:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:22.935 11:20:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:22.935 11:20:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:23.193 11:20:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:23.193 11:20:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:23.450 11:20:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:23.450 11:20:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:23.450 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:23.450 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:23.450 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:23.450 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:23.450 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:23.450 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:23.450 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:23.450 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:23.450 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:23.450 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:23.450 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:23.450 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:23.451 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:23.451 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:23.451 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:23.451 11:20:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:23.708 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:23.708 fio-3.35 00:12:23.708 Starting 1 thread 00:12:27.000 00:12:27.000 test: (groupid=0, jobs=1): err= 0: pid=66007: Tue Dec 10 11:20:48 2024 00:12:27.000 read: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(115MiB/2001msec) 00:12:27.000 slat (nsec): min=4561, max=63383, avg=6635.47, stdev=2564.14 00:12:27.000 clat (usec): min=279, max=12174, avg=4335.54, stdev=1047.25 00:12:27.000 lat (usec): min=284, max=12180, avg=4342.18, stdev=1048.07 00:12:27.000 clat percentiles (usec): 00:12:27.000 | 1.00th=[ 2409], 5.00th=[ 3097], 10.00th=[ 3392], 20.00th=[ 3556], 00:12:27.000 | 30.00th=[ 3720], 40.00th=[ 4080], 50.00th=[ 4359], 60.00th=[ 4424], 00:12:27.000 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 5342], 95.00th=[ 6325], 00:12:27.000 | 99.00th=[ 8356], 99.50th=[ 9241], 99.90th=[10683], 99.95th=[11338], 00:12:27.000 | 99.99th=[11994] 00:12:27.000 bw ( KiB/s): min=58024, max=59800, per=100.00%, avg=58997.33, stdev=900.22, samples=3 00:12:27.000 iops : min=14506, max=14950, avg=14749.33, stdev=225.05, samples=3 00:12:27.000 write: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(115MiB/2001msec); 0 zone resets 00:12:27.000 slat (nsec): min=4659, max=84743, avg=6700.83, stdev=2511.27 00:12:27.000 clat (usec): min=300, max=12254, avg=4345.41, stdev=1047.90 00:12:27.000 lat (usec): min=307, max=12262, avg=4352.11, stdev=1048.73 00:12:27.000 clat percentiles (usec): 00:12:27.000 | 1.00th=[ 2442], 5.00th=[ 3163], 10.00th=[ 3392], 20.00th=[ 3556], 00:12:27.000 | 30.00th=[ 3752], 40.00th=[ 4080], 50.00th=[ 4359], 60.00th=[ 4424], 00:12:27.000 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 5407], 95.00th=[ 6390], 00:12:27.000 | 99.00th=[ 8455], 99.50th=[ 9110], 99.90th=[10421], 99.95th=[10945], 00:12:27.000 | 99.99th=[11994] 00:12:27.000 bw ( KiB/s): min=57960, max=59480, per=100.00%, avg=58818.67, stdev=778.98, samples=3 00:12:27.000 iops : min=14490, max=14870, avg=14704.67, stdev=194.74, samples=3 00:12:27.000 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:12:27.000 lat (msec) : 2=0.32%, 4=38.49%, 10=60.96%, 20=0.19% 00:12:27.000 cpu : usr=97.95%, sys=0.10%, ctx=5, majf=0, minf=608 00:12:27.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:27.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:27.000 issued rwts: total=29373,29411,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:27.000 00:12:27.000 Run status group 0 (all jobs): 00:12:27.000 READ: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=115MiB (120MB), run=2001-2001msec 00:12:27.000 WRITE: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=115MiB (120MB), run=2001-2001msec 00:12:27.000 ----------------------------------------------------- 00:12:27.000 Suppressions used: 00:12:27.000 count bytes template 00:12:27.000 1 32 /usr/src/fio/parse.c 00:12:27.000 1 8 libtcmalloc_minimal.so 00:12:27.000 ----------------------------------------------------- 00:12:27.000 00:12:27.000 11:20:48 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:12:27.000 11:20:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:27.000 11:20:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:27.000 11:20:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:27.258 11:20:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:27.258 11:20:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:27.516 11:20:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:27.516 11:20:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:27.516 11:20:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:27.775 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:27.775 fio-3.35 00:12:27.775 Starting 1 thread 00:12:31.059 00:12:31.059 test: (groupid=0, jobs=1): err= 0: pid=66068: Tue Dec 10 11:20:52 2024 00:12:31.059 read: IOPS=12.8k, BW=50.0MiB/s (52.4MB/s)(100MiB/2002msec) 00:12:31.059 slat (nsec): min=4547, max=51652, avg=8231.53, stdev=4469.40 00:12:31.059 clat (usec): min=1081, max=13576, avg=4996.15, stdev=1476.09 00:12:31.059 lat (usec): min=1104, max=13590, avg=5004.39, stdev=1478.94 00:12:31.059 clat percentiles (usec): 00:12:31.059 | 1.00th=[ 2671], 5.00th=[ 3228], 10.00th=[ 3589], 20.00th=[ 4047], 00:12:31.059 | 30.00th=[ 4228], 40.00th=[ 4359], 
50.00th=[ 4490], 60.00th=[ 4686], 00:12:31.059 | 70.00th=[ 4948], 80.00th=[ 6456], 90.00th=[ 7111], 95.00th=[ 8029], 00:12:31.059 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[11207], 99.95th=[11731], 00:12:31.059 | 99.99th=[13042] 00:12:31.059 bw ( KiB/s): min=47680, max=56016, per=99.16%, avg=50741.33, stdev=4587.63, samples=3 00:12:31.059 iops : min=11920, max=14004, avg=12685.33, stdev=1146.91, samples=3 00:12:31.059 write: IOPS=12.8k, BW=49.8MiB/s (52.3MB/s)(99.8MiB/2002msec); 0 zone resets 00:12:31.059 slat (usec): min=4, max=539, avg= 8.29, stdev= 5.57 00:12:31.059 clat (usec): min=1151, max=13882, avg=4987.78, stdev=1472.32 00:12:31.059 lat (usec): min=1174, max=13909, avg=4996.07, stdev=1475.17 00:12:31.059 clat percentiles (usec): 00:12:31.059 | 1.00th=[ 2671], 5.00th=[ 3228], 10.00th=[ 3621], 20.00th=[ 4015], 00:12:31.059 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4621], 00:12:31.059 | 70.00th=[ 4948], 80.00th=[ 6390], 90.00th=[ 7177], 95.00th=[ 8029], 00:12:31.059 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[10945], 99.95th=[11863], 00:12:31.059 | 99.99th=[13435] 00:12:31.059 bw ( KiB/s): min=46824, max=56464, per=99.53%, avg=50789.33, stdev=5042.20, samples=3 00:12:31.060 iops : min=11706, max=14116, avg=12697.33, stdev=1260.55, samples=3 00:12:31.060 lat (msec) : 2=0.18%, 4=19.16%, 10=80.41%, 20=0.25% 00:12:31.060 cpu : usr=97.70%, sys=0.05%, ctx=4, majf=0, minf=608 00:12:31.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:31.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:31.060 issued rwts: total=25612,25541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:31.060 00:12:31.060 Run status group 0 (all jobs): 00:12:31.060 READ: bw=50.0MiB/s (52.4MB/s), 50.0MiB/s-50.0MiB/s (52.4MB/s-52.4MB/s), io=100MiB (105MB), run=2002-2002msec 00:12:31.060 WRITE: bw=49.8MiB/s (52.3MB/s), 49.8MiB/s-49.8MiB/s (52.3MB/s-52.3MB/s), io=99.8MiB (105MB), run=2002-2002msec 00:12:31.060 ----------------------------------------------------- 00:12:31.060 Suppressions used: 00:12:31.060 count bytes template 00:12:31.060 1 32 /usr/src/fio/parse.c 00:12:31.060 1 8 libtcmalloc_minimal.so 00:12:31.060 ----------------------------------------------------- 00:12:31.060 00:12:31.060 11:20:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:31.060 11:20:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:31.060 11:20:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:31.060 11:20:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:31.318 11:20:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:31.318 11:20:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:31.576 11:20:53 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:31.576 11:20:53 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe 
traddr=0000.00.13.0' --bs=4096 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:31.576 11:20:53 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:31.576 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:31.576 fio-3.35 00:12:31.576 Starting 1 thread 00:12:35.760 00:12:35.760 test: (groupid=0, jobs=1): err= 0: pid=66134: Tue Dec 10 11:20:57 2024 00:12:35.760 read: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(107MiB/2001msec) 00:12:35.760 slat (nsec): min=4601, max=54029, avg=7433.73, stdev=2597.75 00:12:35.760 clat (usec): min=248, max=9562, avg=4663.28, stdev=687.45 00:12:35.760 lat (usec): min=254, max=9573, avg=4670.71, stdev=688.31 00:12:35.760 clat percentiles (usec): 00:12:35.760 | 1.00th=[ 3392], 5.00th=[ 3752], 10.00th=[ 3884], 20.00th=[ 4228], 00:12:35.760 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4621], 60.00th=[ 4686], 00:12:35.760 | 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5407], 95.00th=[ 5997], 00:12:35.760 | 99.00th=[ 7046], 99.50th=[ 7504], 99.90th=[ 8291], 99.95th=[ 8848], 00:12:35.760 | 99.99th=[ 9503] 00:12:35.760 bw ( KiB/s): min=49552, max=58328, per=98.59%, avg=53920.00, stdev=4388.14, samples=3 00:12:35.760 iops : min=12388, max=14582, avg=13480.00, stdev=1097.03, samples=3 00:12:35.760 write: IOPS=13.7k, BW=53.3MiB/s (55.9MB/s)(107MiB/2001msec); 0 zone resets 00:12:35.760 slat (nsec): min=4712, max=43331, avg=7556.95, stdev=2489.76 00:12:35.760 clat (usec): min=317, max=9637, avg=4672.78, stdev=690.06 00:12:35.760 lat (usec): min=323, max=9646, avg=4680.33, stdev=690.92 00:12:35.760 clat percentiles (usec): 00:12:35.760 | 1.00th=[ 3392], 5.00th=[ 3752], 10.00th=[ 3916], 20.00th=[ 4228], 00:12:35.760 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4621], 60.00th=[ 4686], 00:12:35.760 | 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5473], 95.00th=[ 6063], 00:12:35.760 | 99.00th=[ 7046], 99.50th=[ 7504], 99.90th=[ 8225], 99.95th=[ 8455], 00:12:35.760 
| 99.99th=[ 9241] 00:12:35.760 bw ( KiB/s): min=49928, max=58336, per=98.77%, avg=53941.33, stdev=4216.95, samples=3 00:12:35.760 iops : min=12482, max=14584, avg=13485.33, stdev=1054.24, samples=3 00:12:35.760 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:12:35.760 lat (msec) : 2=0.07%, 4=13.95%, 10=85.93% 00:12:35.760 cpu : usr=98.10%, sys=0.10%, ctx=3, majf=0, minf=607 00:12:35.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:35.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.760 issued rwts: total=27360,27320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.760 00:12:35.760 Run status group 0 (all jobs): 00:12:35.760 READ: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=107MiB (112MB), run=2001-2001msec 00:12:35.760 WRITE: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=107MiB (112MB), run=2001-2001msec 00:12:35.760 ----------------------------------------------------- 00:12:35.760 Suppressions used: 00:12:35.760 count bytes template 00:12:35.760 1 32 /usr/src/fio/parse.c 00:12:35.760 1 8 libtcmalloc_minimal.so 00:12:35.760 ----------------------------------------------------- 00:12:35.760 00:12:35.760 11:20:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:35.760 11:20:57 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:12:35.760 00:12:35.760 real 0m17.037s 00:12:35.760 user 0m13.528s 00:12:35.760 sys 0m2.255s 00:12:35.761 11:20:57 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.761 11:20:57 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:12:35.761 ************************************ 00:12:35.761 END TEST nvme_fio 00:12:35.761 ************************************ 00:12:35.761 00:12:35.761 real 1m31.522s 00:12:35.761 user 3m47.354s 00:12:35.761 sys 0m14.664s 00:12:35.761 11:20:57 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.761 11:20:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:35.761 ************************************ 00:12:35.761 END TEST nvme 00:12:35.761 ************************************ 00:12:35.761 11:20:57 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:12:35.761 11:20:57 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:35.761 11:20:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:35.761 11:20:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.761 11:20:57 -- common/autotest_common.sh@10 -- # set +x 00:12:35.761 ************************************ 00:12:35.761 START TEST nvme_scc 00:12:35.761 ************************************ 00:12:35.761 11:20:57 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:35.761 * Looking for test storage... 
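The four nvme_fio passes above all follow one loop: enumerate the PCIe controllers, skip any without an active namespace, pick the block size from the identify output, then hand fio the SPDK ioengine. Below is a condensed bash sketch of that flow, reconstructed from the xtrace (nvme/nvme.sh plus the fio_plugin helper in autotest_common.sh); the loop shape, the libasan LD_PRELOAD dance, and the colon-to-dot traddr rewrite are read off the trace, everything else is an assumption rather than the repository source:

    # Sketch only -- reconstructed from the xtrace, not copied from the repo.
    rootdir=/home/vagrant/spdk_repo/spdk
    PLUGIN_DIR=$rootdir/app/fio/nvme

    fio_plugin() {
        local plugin=$1 asan_lib
        shift
        # fio loads the ioengine via LD_PRELOAD (ioengine=spdk in the job
        # file); if the plugin was built with ASan, the sanitizer runtime
        # must be preloaded ahead of it, so ldd the plugin to find it.
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # Skip controllers that expose no active namespace.
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" |
            grep -qE '^Namespace ID:[0-9]+' || continue
        bs=4096 # every namespace in this run is plain 4 KiB; one matching
                # 'Extended Data LBA' would get a metadata-inclusive size
        # fio's --filename syntax reserves ':', hence the dot-rewritten traddr.
        fio_plugin "$rootdir/build/fio/spdk_nvme" \
            "$PLUGIN_DIR/example_config.fio" \
            "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
    done

All four controllers (0000:00:10.0 through 0000:00:13.0) take the bs=4096 branch here, which is why each fio job reports identical 4096B read/write geometry.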
00:12:35.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:35.761 11:20:57 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:35.761 11:20:57 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:35.761 11:20:57 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:35.761 11:20:57 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@345 -- # : 1 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.761 11:20:57 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:36.020 11:20:57 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:12:36.020 11:20:57 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@368 -- # return 0 00:12:36.021 11:20:57 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.021 11:20:57 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:36.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.021 --rc genhtml_branch_coverage=1 00:12:36.021 --rc genhtml_function_coverage=1 00:12:36.021 --rc genhtml_legend=1 00:12:36.021 --rc geninfo_all_blocks=1 00:12:36.021 --rc geninfo_unexecuted_blocks=1 00:12:36.021 00:12:36.021 ' 00:12:36.021 11:20:57 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:36.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.021 --rc genhtml_branch_coverage=1 00:12:36.021 --rc genhtml_function_coverage=1 00:12:36.021 --rc genhtml_legend=1 00:12:36.021 --rc geninfo_all_blocks=1 00:12:36.021 --rc geninfo_unexecuted_blocks=1 00:12:36.021 00:12:36.021 ' 00:12:36.021 11:20:57 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:36.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.021 --rc genhtml_branch_coverage=1 00:12:36.021 --rc genhtml_function_coverage=1 00:12:36.021 --rc genhtml_legend=1 00:12:36.021 --rc geninfo_all_blocks=1 00:12:36.021 --rc geninfo_unexecuted_blocks=1 00:12:36.021 00:12:36.021 ' 00:12:36.021 11:20:57 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:36.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.021 --rc genhtml_branch_coverage=1 00:12:36.021 --rc genhtml_function_coverage=1 00:12:36.021 --rc genhtml_legend=1 00:12:36.021 --rc geninfo_all_blocks=1 00:12:36.021 --rc geninfo_unexecuted_blocks=1 00:12:36.021 00:12:36.021 ' 00:12:36.021 11:20:57 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.021 11:20:57 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.021 11:20:57 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.021 11:20:57 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.021 11:20:57 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.021 11:20:57 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:36.021 11:20:57 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
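Before the controller scan starts, the prologue above gates LCOV_OPTS on `lt 1.15 2`, a pure-bash version comparison from scripts/common.sh: split both versions on `.`, `-` and `:`, zero-pad the shorter one, and compare field by field. A standalone sketch matching the traced steps follows; only the plain-integer path of `decimal` is exercised in this run, so the fallback branch is an assumption:

    # Sketch of scripts/common.sh cmp_versions, reconstructed from the xtrace.
    lt() { cmp_versions "$1" "<" "$2"; }

    decimal() {
        # Only the integer branch appears in the trace; anything else -> 0.
        [[ $1 =~ ^[0-9]+$ ]] && echo $(($1)) || echo 0
    }

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 lt=0 gt=0 eq=0 v
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

        case "$op" in
            "<") lt=1 ;; ">") gt=1 ;;
            "<=") lt=1 eq=1 ;; ">=") gt=1 eq=1 ;; "==") eq=1 ;;
        esac

        # Walk the longer version, padding the shorter one with zeros.
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            ((ver1[v] > ver2[v])) && return $((gt ^ 1))
            ((ver1[v] < ver2[v])) && return $((lt ^ 1))
        done
        ((eq == 1))
    }

    lt 1.15 2 && echo 'lcov is pre-2.x'   # succeeds, as in the log above

`lt 1.15 2` decides at the first field (1 < 2, with lt=1 this yields success), which is the `return 0` the trace records, so the pre-2.x `--rc lcov_*` option block seen above is the one exported.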
00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:36.021 11:20:57 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:36.021 11:20:57 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:36.021 11:20:57 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:36.021 11:20:57 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:36.021 11:20:57 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:36.021 11:20:57 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:36.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:36.538 Waiting for block devices as requested 00:12:36.538 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:36.538 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:36.538 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:36.797 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:42.071 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:42.071 11:21:03 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:42.071 11:21:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:42.071 11:21:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:42.071 11:21:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:42.071 11:21:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:42.071 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:42.072 11:21:03 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.072 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.073 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:42.074 11:21:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:12:42.074 
11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.074 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
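The functions.sh@21-23 entries repeating through this dump are one parsing loop at work: each line of nvme-cli output is split on the first colon into a register name and a value, lines without a value are skipped (the functions.sh@22 "[[ -n ... ]]" test), and each surviving pair is stored into a globally declared associative array via eval. A condensed sketch of that loop, assuming nvme-cli's "reg : val" output format; this is a simplification for illustration, not the verbatim SPDK helper:

    # Condensed sketch of the nvme_get loop traced above; simplified,
    # not the verbatim test/nvme/functions.sh helper.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # e.g. declare -gA ng0n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # functions.sh@22: skip non "reg : val" lines
            reg=${reg//[[:space:]]/}        # "nsze  " -> "nsze"
            val=${val# }                    # trim one pad space, keep inner spacing
            eval "${ref}[$reg]=\"\$val\""   # functions.sh@23: ng0n1[nsze]="0x140000"
        done < <("$@")                      # e.g. /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
    }

Multi-word values such as ps0 ("mp:25.00W operational enlat:16 ...") and the lbaf descriptors survive intact because only the first colon splits the line and the eval assignment is quoted as a whole.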
00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:12:42.075 11:21:03 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.075 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:42.076 11:21:03 nvme_scc 
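The functions.sh@54 pattern seen for both ng0n1 and nvme0n1 is a single extglob that enumerates a controller's character-device and block-device namespace nodes in one pass. A small standalone illustration, assuming bash extglob support and the sysfs paths shown in this log (the loop body here is illustrative only):

    # Illustration of the functions.sh@54 namespace walk.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme0
    inst=${ctrl##*nvme}     # "0"
    name=${ctrl##*/}        # "nvme0"
    for ns in "$ctrl/"@("ng$inst"|"${name}n")*; do
        # matches /sys/class/nvme/nvme0/ng0n1 and /sys/class/nvme/nvme0/nvme0n1
        ns_dev=${ns##*/}
        echo "namespace node: $ns_dev (index ${ns_dev##*n})"
    done

The "${ns##*n}" suffix strip is how functions.sh@58 derives the _ctrl_ns index, so ng0n1 and nvme0n1 both land in slot 1 of nvme0_ns, the block device overwriting the character device.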
-- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:42.076 11:21:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.076 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:42.077 11:21:04 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:42.077 11:21:04 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:42.077 11:21:04 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:42.077 11:21:04 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:42.077 11:21:04 
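At functions.sh@60-63 each fully parsed controller is registered in the suite's top-level maps, and the per-controller loop then moves on to nvme1, where scripts/common.sh decides whether the PCI device may be used; the bare "[[ =~ 0000:00:10.0 ]]" in the trace is an unset allow-list expanding to nothing, so the "[[ -z '' ]]" fallback at common.sh@25 admits the device. A sketch of that bookkeeping, assuming the array declarations happen earlier in functions.sh:

    # Sketch of the functions.sh@60-63 registration; declarations assumed.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrl_dev=nvme0
    ctrls["$ctrl_dev"]=nvme0                 # handle -> id-ctrl array name
    nvmes["$ctrl_dev"]=nvme0_ns              # handle -> namespace map name
    bdfs["$ctrl_dev"]=0000:00:11.0           # handle -> PCI address (from this log)
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme0   # numeric index keeps enumeration order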
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.077 
11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:42.077 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:42.078 
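The nvme1[ver]=0x10400 captured above is the raw NVMe VS register value; per the NVMe base spec the major version sits in bits 31:16 and the minor in bits 15:8, so this QEMU controller reports NVMe 1.4:

    # Decode the VS value traced for nvme1 (field layout per the NVMe base spec).
    ver=0x10400
    printf 'NVMe %d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff ))   # -> NVMe 1.4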
11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.078 11:21:04 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:42.078 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.079 11:21:04 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.079 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:42.080 11:21:04 
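The trace up to this point is nvme/functions.sh's nvme_get helper populating the nvme1 associative array from `nvme id-ctrl /dev/nvme1` output: each "reg : val" line is split on ':' (the IFS=: at functions.sh@21), empty values are skipped by the [[ -n ... ]] test at @22, and the pair is stored via eval at @23. A minimal sketch of that pattern (simplified for illustration; not the verbatim SPDK helper):

    nvme_get() {
        # Usage: nvme_get <array-name> <nvme-cli subcommand> <device>
        # e.g.:  nvme_get nvme1 id-ctrl /dev/nvme1; echo "${nvme1[oacs]}"
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # global associative array, as at functions.sh@20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # functions.sh@22 skips lines with no value
            reg=${reg//[[:space:]]/}        # normalize the key ("oacs", "acl", ...)
            eval "${ref}[\$reg]=\${val# }"  # functions.sh@23-style indirect assignment
        done < <(nvme "$@")
    }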
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.080 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:12:42.081 11:21:04 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:42.081 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 
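Between the two id-ns dumps, functions.sh@54-57 iterates the controller's namespace nodes with an extglob pattern that matches both the generic character device (ng1n1) and the block device (nvme1n1) under the controller's sysfs directory, which is why the same identify-namespace data is captured twice. A hedged sketch of that glob (paths assumed for illustration):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # Matches .../nvme1/ng1n1 (generic char node) and .../nvme1/nvme1n1 (block
    # node), mirroring the functions.sh@54 loop traced above.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                  # functions.sh@56: ng1n1, then nvme1n1
        echo "namespace node: $ns_dev"
    done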
11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:42.082 
11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:42.082 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:42.083 11:21:04 
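With both namespace nodes parsed, the entries opening the next chunk (functions.sh@60-63) register the controller in the script's global bookkeeping: ctrls maps the device name to itself, nvmes to the name of its per-namespace array (nvme1_ns), bdfs to the PCI address 0000:00:10.0, and ordered_ctrls keeps controllers sorted by index. A minimal sketch of that bookkeeping (array names taken from the trace; the surrounding loop is elided):

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()

    ctrl_dev=nvme1 pci=0000:00:10.0
    ctrls["$ctrl_dev"]=$ctrl_dev                  # functions.sh@60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns             # functions.sh@61: name of the ns array
    bdfs["$ctrl_dev"]=$pci                        # functions.sh@62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # functions.sh@63: index 1 -> nvme1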
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:42.083 11:21:04 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:42.083 11:21:04 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:42.083 11:21:04 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:42.083 11:21:04 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:42.083 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
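The wall of eval lines above is nvme_get from nvme/functions.sh at work: it runs nvme-cli's id-ctrl (or id-ns) against the device, splits each "field : value" output line on the colon, and evals the pair into a global associative array named after the device, which is why the trace repeats the IFS=: / read -r reg val / eval pattern for every field. A minimal sketch of that loop, assuming nvme-cli's usual one-pair-per-line output (simplified, not the verbatim SPDK helper):

    # Parse `nvme <cmd> <dev>` output into a global associative array <ref>.
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # e.g. declares nvme2=() globally
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # keep only "field : value" lines
            reg=${reg//[[:space:]]/}         # nvme-cli pads the field name
            eval "${ref}[\$reg]=\${val# }"   # nvme2[vid]=0x1b36, ...
        done < <(nvme "$cmd" "$dev")
    }

Once the array is filled, later checks reduce to plain parameter expansion and arithmetic, e.g. (( ${nvme2[oncs]} & 0x100 )), which is presumably what this nvme_scc test keys on: ONCS bit 8 advertises the Copy command, and the 0x15d captured further down has that bit set.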
00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
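Stepping back from the field-by-field output: each controller section of the trace starts the same way. The scan loop walks /sys/class/nvme/nvme*, resolves the controller's PCI address, filters it through pci_can_use (the allow/block-list check from scripts/common.sh seen above), and only then calls nvme_get and records the controller in the bookkeeping arrays. Roughly, with the array names taken from the trace (a sketch; reading the BDF from the sysfs "address" attribute is an assumption, the trace only shows the resulting value):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue           # glob may match nothing
        pci=$(<"$ctrl/address")              # PCI BDF, e.g. 0000:00:12.0
        pci_can_use "$pci" || continue       # honors PCI_ALLOWED/PCI_BLOCKED
        ctrl_dev=${ctrl##*/}                 # e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        ctrls[$ctrl_dev]=$ctrl_dev           # registration, as in the trace
        nvmes[$ctrl_dev]=${ctrl_dev}_ns      # name of the per-ctrl ns map
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done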
00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:42.084 11:21:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
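Two of the values just captured are easy to misread: wctemp and cctemp are reported in Kelvin per the NVMe spec, so the 343 and 373 above are QEMU's usual defaults of 70 degrees C (warning threshold) and 100 degrees C (critical threshold). A quick conversion against the array the trace is building:

    echo "warn: $(( ${nvme2[wctemp]} - 273 )) C"   # 343 K -> 70 C
    echo "crit: $(( ${nvme2[cctemp]} - 273 )) C"   # 373 K -> 100 C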
00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.084 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:42.085 11:21:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:42.085 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:42.086 
11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:42.086 
11:21:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.086 11:21:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
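The namespace sections follow the same nvme_get pattern, but the size fields are in logical blocks, not bytes: flbas=0x4 means its low nibble selects LBA format 4 (the lbaf4 entry marked "(in use)" later in the dump, ms:0 lbads:12), so the block size is 2^12 = 4096 bytes and nsze=0x100000 blocks works out to 4 GiB. A sketch of that computation against the strings the trace stores (names from the trace; the regex just picks lbads out of "ms:0 lbads:12 rp:0 (in use)"):

    fmt=$(( ${ng2n1[flbas]} & 0xf ))              # low nibble = active LBA format
    [[ ${ng2n1[lbaf$fmt]} =~ lbads:([0-9]+) ]]    # e.g. "ms:0 lbads:12 rp:0 (in use)"
    bs=$(( 1 << BASH_REMATCH[1] ))                # 2^12 = 4096-byte blocks
    echo $(( ${ng2n1[nsze]} * bs ))               # 0x100000 * 4096 = 4 GiB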
00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:12:42.351 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:12:42.352 11:21:04 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:12:42.352 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 
11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.353 11:21:04 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.353 11:21:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.354 11:21:04 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.354 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.355 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.356 11:21:04 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:42.356 11:21:04 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:42.356 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:42.357 11:21:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:42.357 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 
11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:42.358 11:21:04 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:42.358 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.359 11:21:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.359 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:42.360 11:21:04 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:42.360 11:21:04 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:42.360 11:21:04 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:42.360 11:21:04 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:42.360 11:21:04 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:42.360 11:21:04 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:42.360 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:42.361 11:21:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 
11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:42.361 11:21:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 
11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:42.361 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:42.362 
11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.362 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:42.363 11:21:04 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:42.363 11:21:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:12:42.363 11:21:04 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
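
The xtrace above shows how nvme/functions.sh builds one associative array per controller and namespace: it pipes `nvme id-ctrl` / `nvme id-ns` output through `IFS=: read -r reg val` and evals each pair into a globally declared array (nvme2n2, nvme2n3, nvme3, ...), so later checks can look up fields such as oncs by name. A minimal standalone sketch of that pattern, using a nameref where the real helper uses 'local -gA' plus eval (nvme-cli and a /dev/nvmeX node are assumed):

    #!/usr/bin/env bash
    # Minimal sketch of the nvme_get pattern traced above: split each
    # "reg : val" line of nvme-cli identify output on ':' and store it
    # in an associative array named by the caller.
    nvme_get() {
        local -n _arr=$1                 # e.g. nvme3
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # "oncs   " -> "oncs", "ps    0" -> "ps0"
            read -r val <<< "$val"       # trim surrounding whitespace
            [[ -n $reg && -n $val ]] && _arr[$reg]=$val
        done < <(nvme "$2" "$3")         # e.g. nvme id-ctrl /dev/nvme3
    }

    declare -A nvme3=()
    nvme_get nvme3 id-ctrl /dev/nvme3
    echo "oncs=${nvme3[oncs]:-unset}"

Note how registers nvme-cli prints as "ps    0" collapse to keys like ps0 once whitespace is stripped, which matches the keys seen in the trace.
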
00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:12:42.622 11:21:04 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:12:42.622 11:21:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:12:42.622 11:21:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:12:42.622 11:21:04 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:42.881 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:43.452 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:43.453 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:43.453 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:43.453 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:43.453 11:21:05 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:43.453 11:21:05 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:43.453 11:21:05 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.453 11:21:05 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:43.711 ************************************ 00:12:43.711 START TEST nvme_simple_copy 00:12:43.711 ************************************ 00:12:43.711 11:21:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:43.969 Initializing NVMe Controllers 00:12:43.969 Attaching to 0000:00:10.0 00:12:43.969 Controller supports SCC. Attached to 0000:00:10.0 00:12:43.969 Namespace ID: 1 size: 6GB 00:12:43.969 Initialization complete. 
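The output that follows records what the simple_copy binary did: write LBAs 0 through 63 with random data, issue a copy to destination LBA 256, then read back and verify that all 64 copied blocks match. The test drives the controller through SPDK's userspace driver, but a rough kernel-side equivalent using nvme-cli's copy support would look like the sketch below (the device path is hypothetical, and whether --blocks takes a 0-based NLB value varies by nvme-cli version, so check nvme-copy(1) before relying on it):

    dev=/dev/nvme0n1    # hypothetical kernel-attached namespace
    bs=4096             # matches the "Namespace Block Size:4096" reported below
    dd if=/dev/urandom of=src.bin bs=$bs count=64
    dd if=src.bin of=$dev bs=$bs seek=0 conv=fsync      # write LBAs 0-63
    nvme copy $dev --sdlba=256 --slbs=0 --blocks=63     # one source range starting at LBA 0
    dd if=$dev of=dst.bin bs=$bs skip=256 count=64 iflag=direct
    cmp src.bin dst.bin && echo "LBAs matching Written Data: 64"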
00:12:43.969 00:12:43.969 Controller QEMU NVMe Ctrl (12340 ) 00:12:43.969 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:12:43.969 Namespace Block Size:4096 00:12:43.969 Writing LBAs 0 to 63 with Random Data 00:12:43.969 Copied LBAs from 0 - 63 to the Destination LBA 256 00:12:43.969 LBAs matching Written Data: 64 00:12:43.969 00:12:43.969 real 0m0.396s 00:12:43.969 user 0m0.202s 00:12:43.969 sys 0m0.091s 00:12:43.969 11:21:06 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.969 ************************************ 00:12:43.969 END TEST nvme_simple_copy 00:12:43.969 11:21:06 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:12:43.969 ************************************ 00:12:43.969 ************************************ 00:12:43.969 END TEST nvme_scc 00:12:43.969 ************************************ 00:12:43.969 00:12:43.969 real 0m8.304s 00:12:43.969 user 0m1.574s 00:12:43.969 sys 0m1.623s 00:12:43.969 11:21:06 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.969 11:21:06 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:43.969 11:21:06 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:12:43.969 11:21:06 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:12:43.969 11:21:06 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:12:43.969 11:21:06 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:12:43.969 11:21:06 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:12:43.969 11:21:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:43.969 11:21:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.969 11:21:06 -- common/autotest_common.sh@10 -- # set +x 00:12:43.969 ************************************ 00:12:43.969 START TEST nvme_fdp 00:12:43.969 ************************************ 00:12:43.969 11:21:06 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:12:44.228 * Looking for test storage... 00:12:44.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:44.228 11:21:06 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:44.228 11:21:06 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:12:44.228 11:21:06 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:44.228 11:21:06 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.228 11:21:06 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:12:44.228 11:21:06 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.228 11:21:06 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:44.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.228 --rc genhtml_branch_coverage=1 00:12:44.228 --rc genhtml_function_coverage=1 00:12:44.228 --rc genhtml_legend=1 00:12:44.229 --rc geninfo_all_blocks=1 00:12:44.229 --rc geninfo_unexecuted_blocks=1 00:12:44.229 00:12:44.229 ' 00:12:44.229 11:21:06 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:44.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.229 --rc genhtml_branch_coverage=1 00:12:44.229 --rc genhtml_function_coverage=1 00:12:44.229 --rc genhtml_legend=1 00:12:44.229 --rc geninfo_all_blocks=1 00:12:44.229 --rc geninfo_unexecuted_blocks=1 00:12:44.229 00:12:44.229 ' 00:12:44.229 11:21:06 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:44.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.229 --rc genhtml_branch_coverage=1 00:12:44.229 --rc genhtml_function_coverage=1 00:12:44.229 --rc genhtml_legend=1 00:12:44.229 --rc geninfo_all_blocks=1 00:12:44.229 --rc geninfo_unexecuted_blocks=1 00:12:44.229 00:12:44.229 ' 00:12:44.229 11:21:06 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:44.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.229 --rc genhtml_branch_coverage=1 00:12:44.229 --rc genhtml_function_coverage=1 00:12:44.229 --rc genhtml_legend=1 00:12:44.229 --rc geninfo_all_blocks=1 00:12:44.229 --rc geninfo_unexecuted_blocks=1 00:12:44.229 00:12:44.229 ' 00:12:44.229 11:21:06 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:44.229 11:21:06 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.229 11:21:06 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.229 11:21:06 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.229 11:21:06 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.229 11:21:06 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.229 11:21:06 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.229 11:21:06 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.229 11:21:06 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:12:44.229 11:21:06 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:44.229 11:21:06 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:12:44.229 11:21:06 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.229 11:21:06 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:44.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:44.812 Waiting for block devices as requested 00:12:44.812 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:44.812 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:44.812 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.071 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:50.350 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:50.350 11:21:12 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:50.350 11:21:12 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:50.350 11:21:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:50.350 11:21:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:50.350 11:21:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:50.350 11:21:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:50.350 11:21:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.350 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:50.351 11:21:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.351 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:50.352 11:21:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 
11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:50.352 11:21:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:12:50.352 11:21:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:12:50.353 11:21:12 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:12:50.353 11:21:12 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:12:50.353 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:12:50.354 11:21:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
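The block of trace above is functions.sh's nvme_get walking the "reg : val" lines that nvme-cli prints for id-ns and eval-ing each pair into a global associative array named after the device (ng0n1 here). A minimal, self-contained sketch of that pattern, assuming nvme-cli's colon-separated output; the function and array names below are illustrative, not the script's own:

  #!/usr/bin/env bash
  # Parse `nvme id-ns <dev>` into the associative array named by $2,
  # mirroring the IFS=: / read -r reg val / eval loop in the trace.
  nvme_get_sketch() {
    local dev=$1 ref=$2 reg val
    declare -gA "$ref=()"
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}                  # strip label padding
      [[ -n $reg && -n $val ]] || continue      # same guard as the [[ -n ... ]] tests above
      eval "${ref}[\$reg]=\"\${val# }\""        # e.g. ns[nsze]=0x140000
    done < <(nvme id-ns "$dev")
  }

  nvme_get_sketch /dev/ng0n1 ns && echo "${ns[nsze]}"

The eval indirection is what lets one helper fill differently named arrays (ng0n1, nvme0n1, nvme1, ...) as the enumeration loop advances through /sys/class/nvme.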
00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:50.354 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:50.355 11:21:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:50.355 11:21:12 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.355 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:50.356 11:21:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:50.356 11:21:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:50.356 11:21:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:50.356 11:21:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:50.356 11:21:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:50.356 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
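Registers such as oacs, frmw, and lpa in the id-ctrl dump above are bit masks delivered as hex strings, so once stored they can be tested directly with shell arithmetic. A small check, assuming the nvme1 array the trace is populating and the OACS bit layout from the NVMe base specification (bit 3 = namespace management):

  # oacs=0x12a sets bits 1, 3, 5 and 8; bit 3 is Namespace Management.
  if (( ${nvme1[oacs]} & (1 << 3) )); then
    echo "nvme1 supports namespace management"
  fi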
00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
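The wctemp=343 and cctemp=373 captured just above are the warning and critical temperature thresholds, which Identify Controller reports in kelvin; converting shows the familiar 70 C / 100 C defaults QEMU exposes. A quick check against the traced values:

  # id-ctrl temperatures are in kelvin: 343 K -> 70 C, 373 K -> 100 C.
  echo "warn: $(( ${nvme1[wctemp]} - 273 )) C"
  echo "crit: $(( ${nvme1[cctemp]} - 273 )) C"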
00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:50.357 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.358 11:21:12 nvme_fdp -- nvme/functions.sh@23 
-- # nvme1 id-ctrl (continued): awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:12:50.359 11:21:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme1 id-ctrl: subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:12:50.359 11:21:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:12:50.359 11:21:12 nvme_fdp -- nvme/functions.sh@54-57 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*: [[ -e /sys/class/nvme/nvme1/ng1n1 ]]; ns_dev=ng1n1; nvme_get ng1n1 id-ns /dev/ng1n1 (via /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1)
00:12:50.359 11:21:12 nvme_fdp -- nvme/functions.sh@21-23 -- # ng1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7
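For reference, the eval/IFS/read pattern repeated throughout this trace comes from the nvme_get helper; a minimal sketch, reconstructed from the nvme/functions.sh@16-23 line references above (the key whitespace trimming and the exact output plumbing are assumptions, not verbatim source):

    # nvme_get <array> <subcommand> <dev>: fill a global associative array
    # from the "reg : val" lines that `nvme id-ctrl` / `nvme id-ns` print.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                        # e.g. declare -gA nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue              # header/blank lines carry no value
            reg=${reg//[[:space:]]/}               # assumed: "lbaf  0" -> "lbaf0"
            eval "${ref}[${reg}]=\"${val# }\""     # e.g. nvme1[awun]="0"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }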
00:12:50.359 11:21:12 nvme_fdp -- nvme/functions.sh@21-23 -- # ng1n1 id-ns: flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:50.360 11:21:12 nvme_fdp -- nvme/functions.sh@21-23 -- # ng1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:12:50.361 11:21:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng1n1
00:12:50.361 11:21:12 nvme_fdp -- nvme/functions.sh@54-57 -- # next ns: [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]; ns_dev=nvme1n1; nvme_get nvme1n1 id-ns /dev/nvme1n1 (via /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1)
00:12:50.361 11:21:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a
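The glob at functions.sh@54 matched both device nodes of the same namespace: the character "generic" node ng1n1 and the block node nvme1n1, which is why the id-ns values above and below are identical. A sketch of that expansion (the @() alternation is verbatim from the trace; the shopt line is an assumption, since extglob must be enabled for the pattern to parse):

    shopt -s extglob                      # assumed: required for the @(...) pattern
    ctrl=/sys/class/nvme/nvme1
    # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1", so the pattern matches
    # both "ng1"* (char node) and "nvme1n"* (block node) under $ctrl:
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*/}"                  # -> ng1n1, nvme1n1
    done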
00:12:50.361 11:21:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme1n1 id-ns: nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:50.362 11:21:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:12:50.362 11:21:12 nvme_fdp -- nvme/functions.sh@58-63 -- # _ctrl_ns[1]=nvme1n1; ctrls[nvme1]=nvme1; nvmes[nvme1]=nvme1_ns; bdfs[nvme1]=0000:00:10.0; ordered_ctrls[1]=nvme1
00:12:50.362 11:21:12 nvme_fdp -- nvme/functions.sh@47-50 -- # next ctrl: [[ -e /sys/class/nvme/nvme2 ]]; pci=0000:00:12.0; pci_can_use 0000:00:12.0 (scripts/common.sh@18-27: [[ =~ 0000:00:12.0 ]], [[ -z '' ]], return 0)
00:12:50.362 11:21:12 nvme_fdp -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2 (via /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)
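The bookkeeping at functions.sh@58-63 is the useful output of this whole walk; a sketch of what those assignments build (array names and example values are taken from the trace, the declarations are assumptions added to make the snippet self-contained):

    declare -A _ctrl_ns ctrls nvmes bdfs         # assumed declared by the caller
    declare -a ordered_ctrls
    ns=/sys/class/nvme/nvme1/nvme1n1 ns_dev=nvme1n1 ctrl_dev=nvme1 pci=0000:00:10.0
    _ctrl_ns[${ns##*n}]=$ns_dev                  # ns index -> node: _ctrl_ns[1]=nvme1n1
    ctrls["$ctrl_dev"]=$ctrl_dev                 # ctrls[nvme1]=nvme1
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns            # controller -> name of its ns map: nvme1_ns
    bdfs["$ctrl_dev"]=$pci                       # controller -> PCI address: 0000:00:10.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # ordered_ctrls[1]=nvme1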
00:12:50.362 11:21:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:12:50.363 11:21:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2 id-ctrl: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0
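These identify fields are raw NVMe-spec units. As a hedged aside: wctemp/cctemp are reported in Kelvin (343 K is about 70 C warning, 373 K about 100 C critical), and mdts is an exponent, max transfer = 2^MDTS units of the controller's minimum memory page size (4 KiB MPSMIN is an assumption for this QEMU controller). A self-contained usage sketch against values parsed above:

    declare -A nvme2=([wctemp]=343 [mdts]=7)                # values copied from the trace
    echo "$(( nvme2[wctemp] - 273 )) C warning threshold"   # -> 70 C
    echo "$(( (1 << nvme2[mdts]) * 4096 )) B max transfer"  # 2^7 * 4 KiB = 524288 B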
00:12:50.364 11:21:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2 id-ctrl: hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0
00:12:50.364 11:21:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2 id-ctrl: nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
00:12:50.365 11:21:12 nvme_fdp --
nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
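The trace above is the nvme_get helper folding nvme-cli "id-ctrl" output into a bash associative array: each "field : value" line is split on the first colon, empty values are skipped, and the pair is written into a caller-named array via eval. A minimal stand-alone sketch of that pattern, assuming nvme-cli is installed; the device path default, the fixed array name "ctrl", and the whitespace trimming are illustrative, not the script's own code:

    #!/usr/bin/env bash
    # Sketch of the id-ctrl parse loop visible in the trace: split each
    # "field : value" line on the first colon and keep non-empty pairs.
    # The real helper evals into a caller-named array; a fixed array name
    # keeps the sketch simple.
    dev=${1:-/dev/nvme2}                  # hypothetical device path
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=$(tr -d '[:space:]' <<<"$reg")                          # e.g. "sqes"
        val=$(sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' <<<"$val")
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val                 # skip blanks
    done < <(nvme id-ctrl "$dev")
    # Matches the values captured above, e.g. sqes=0x66 cqes=0x44 oncs=0x15d:
    printf 'sqes=%s cqes=%s oncs=%s\n' "${ctrl[sqes]}" "${ctrl[cqes]}" "${ctrl[oncs]}"

Note that a line such as "ps    0 : mp:25.00W operational ..." survives this split intact: only the first colon separates the key, so the key collapses to ps0 and the remaining colons stay in the value, which is exactly the nvme2[ps0] string recorded in the trace.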
00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:12:50.365 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 
11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.366 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.367 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:12:50.631 11:21:12 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.631 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 
11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:50.632 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:12:50.633 
11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
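The flbas/lbaf pairs being captured for each ng2nX namespace encode its block layout: the low nibble of flbas selects the active LBA format, and that format's lbads field is log2 of the data block size. A hedged decode using the values recorded in this trace; the script is illustrative and not part of nvme/functions.sh:

    #!/usr/bin/env bash
    # Decode of the flbas/lbaf values parsed above: flbas=0x4 points at
    # lbaf4, whose lbads:12 means 2^12 = 4096-byte data blocks with no
    # per-block metadata (ms:0).
    flbas=0x4
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    fmt=$(( flbas & 0xf ))                                        # -> 4
    lbads=$(sed -n 's/.*lbads:\([0-9][0-9]*\).*/\1/p' <<<"$lbaf4")
    echo "active LBA format $fmt: $((1 << lbads))-byte blocks"    # 4096-byte blocks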
00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:12:50.633 11:21:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:50.633 11:21:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.633 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:50.634 11:21:12 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:50.634 
11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:50.634 11:21:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:50.634 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:50.635 
11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
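
The lbaf0-lbaf7 fields just captured for nvme2n1 describe the eight supported LBA formats: ms is the metadata bytes per block, lbads the log2 of the data block size, rp a relative-performance hint; flbas selects the one in use (0x4 -> lbaf4, i.e. 4096-byte blocks with no metadata). A short sketch decoding that, seeded with the exact values from this trace; the sed extraction of lbads is just one convenient way to pull the field out:

  # Decode the in-use LBA format from the fields captured above.
  declare -A nvme2n1=( [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )
  idx=$(( nvme2n1[flbas] & 0xf ))    # flbas bits 0-3 index lbafN -> 4
  lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme2n1[lbaf$idx]}")
  echo "in-use block size: $(( 1 << lbads )) bytes"   # 1 << 12 = 4096
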
00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:50.635 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:50.635 11:21:12 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:50.636 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:50.637 11:21:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:50.637 11:21:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.637 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:50.638 11:21:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:50.638 11:21:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:50.638 11:21:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:50.638 11:21:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:50.638 11:21:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:50.638 11:21:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.638 11:21:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
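The lbaf0..lbaf7 descriptors captured above for nvme2n3 each encode an LBA format as metadata size (ms), log2 data size (lbads), and relative performance (rp); the descriptor tagged "(in use)" (lbaf4, lbads:12) is the active format. A minimal sketch of recovering the byte size from one of these stored strings, assuming the same "ms:.. lbads:.. rp:.." layout the parser keeps (the helper name is hypothetical):

    # Hypothetical helper: lbads is log2 of the LBA data size, so
    # lbads:9 -> 512-byte blocks and lbads:12 -> 4096-byte blocks.
    lbaf_block_size() {
      local desc=$1 lbads
      [[ $desc =~ lbads:([0-9]+) ]] || return 1
      lbads=${BASH_REMATCH[1]}
      echo $((1 << lbads))
    }

    lbaf_block_size 'ms:0 lbads:12 rp:0 (in use)'   # prints 4096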
00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.639 11:21:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 
11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:50.639 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.640 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
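The sqes and cqes values just parsed for nvme3 (0x66 and 0x44) pack two log2 sizes into one byte: the low nibble is the minimum (required) queue entry size and the high nibble the maximum, per the NVMe Identify Controller layout. A small sketch of that decode (the helper name is illustrative):

    # Decode a SQES/CQES byte: low nibble = log2 min entry size,
    # high nibble = log2 max entry size.
    decode_qes() {
      local val=$(( $1 ))
      echo "min=$((1 << (val & 0xf))) max=$((1 << (val >> 4)))"
    }

    decode_qes 0x66   # SQ entries: min=64 max=64
    decode_qes 0x44   # CQ entries: min=16 max=16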
00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
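Every nvme3[...] assignment in this trace comes from the same loop: nvme_get runs nvme-cli's id-ctrl (or id-ns), splits each output line on the colon, and stores the register/value pair in a caller-named associative array. A simplified sketch of that pattern, not the exact nvme/functions.sh code:

    # Simplified nvme_get pattern: "reg : value" lines from nvme-cli
    # become entries in an associative array.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue
      reg=${reg//[[:space:]]/}            # trim padding around the key
      ctrl[$reg]=${val# }                 # e.g. ctrl[mdts]=7
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)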
00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:50.641 11:21:12 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:12:50.641 11:21:12 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:12:50.642 11:21:12 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:12:50.642 11:21:12 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:50.642 11:21:12 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:50.642 11:21:12 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:51.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:51.776 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:51.776 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:51.776 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:51.776 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:51.776 11:21:13 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:51.776 11:21:13 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:51.776 11:21:13 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.776 11:21:13 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:51.776 ************************************ 00:12:51.776 START TEST nvme_flexible_data_placement 00:12:51.776 ************************************ 00:12:51.776 11:21:13 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:52.343 Initializing NVMe Controllers 00:12:52.343 Attaching to 0000:00:13.0 00:12:52.343 Controller supports FDP Attached to 0000:00:13.0 00:12:52.343 Namespace ID: 1 Endurance Group ID: 1 00:12:52.343 Initialization complete. 
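The controller selection traced above ends with only nvme3 echoed because ctrl_has_fdp tests bit 19 of each controller's CTRATT: nvme0, nvme1 and nvme2 report 0x8000 (bit clear), while nvme3 reports 0x88010 (bit set), matching its FDP-enabled QEMU subsystem nqn.2019-08.org.qemu:fdp-subsys3. The test in isolation:

    # FDP capability is CTRATT bit 19 (1 << 19 == 0x80000).
    ctrl_has_fdp() {
      local ctratt=$(( $1 ))
      (( ctratt & 1 << 19 ))
    }

    ctrl_has_fdp 0x8000  && echo fdp    # no output: bit 19 clear
    ctrl_has_fdp 0x88010 && echo fdp    # prints "fdp": bit 19 set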
00:12:52.343 00:12:52.343 ================================== 00:12:52.343 == FDP tests for Namespace: #01 == 00:12:52.343 ================================== 00:12:52.343 00:12:52.343 Get Feature: FDP: 00:12:52.343 ================= 00:12:52.343 Enabled: Yes 00:12:52.343 FDP configuration Index: 0 00:12:52.343 00:12:52.343 FDP configurations log page 00:12:52.343 =========================== 00:12:52.343 Number of FDP configurations: 1 00:12:52.343 Version: 0 00:12:52.343 Size: 112 00:12:52.343 FDP Configuration Descriptor: 0 00:12:52.343 Descriptor Size: 96 00:12:52.343 Reclaim Group Identifier format: 2 00:12:52.343 FDP Volatile Write Cache: Not Present 00:12:52.343 FDP Configuration: Valid 00:12:52.343 Vendor Specific Size: 0 00:12:52.343 Number of Reclaim Groups: 2 00:12:52.343 Number of Reclaim Unit Handles: 8 00:12:52.343 Max Placement Identifiers: 128 00:12:52.343 Number of Namespaces Supported: 256 00:12:52.343 Reclaim Unit Nominal Size: 6000000 bytes 00:12:52.343 Estimated Reclaim Unit Time Limit: Not Reported 00:12:52.343 RUH Desc #000: RUH Type: Initially Isolated 00:12:52.343 RUH Desc #001: RUH Type: Initially Isolated 00:12:52.343 RUH Desc #002: RUH Type: Initially Isolated 00:12:52.343 RUH Desc #003: RUH Type: Initially Isolated 00:12:52.343 RUH Desc #004: RUH Type: Initially Isolated 00:12:52.343 RUH Desc #005: RUH Type: Initially Isolated 00:12:52.343 RUH Desc #006: RUH Type: Initially Isolated 00:12:52.343 RUH Desc #007: RUH Type: Initially Isolated 00:12:52.343 00:12:52.343 FDP reclaim unit handle usage log page 00:12:52.343 ====================================== 00:12:52.343 Number of Reclaim Unit Handles: 8 00:12:52.343 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:52.343 RUH Usage Desc #001: RUH Attributes: Unused 00:12:52.343 RUH Usage Desc #002: RUH Attributes: Unused 00:12:52.343 RUH Usage Desc #003: RUH Attributes: Unused 00:12:52.343 RUH Usage Desc #004: RUH Attributes: Unused 00:12:52.343 RUH Usage Desc #005: RUH Attributes: Unused 00:12:52.343 RUH Usage Desc #006: RUH Attributes: Unused 00:12:52.343 RUH Usage Desc #007: RUH Attributes: Unused 00:12:52.343 00:12:52.343 FDP statistics log page 00:12:52.343 ======================= 00:12:52.343 Host bytes with metadata written: 795095040 00:12:52.343 Media bytes with metadata written: 795234304 00:12:52.343 Media bytes erased: 0 00:12:52.343 00:12:52.343 FDP Reclaim unit handle status 00:12:52.343 ============================== 00:12:52.343 Number of RUHS descriptors: 2 00:12:52.343 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000009bd 00:12:52.343 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:52.343 00:12:52.343 FDP write on placement id: 0 success 00:12:52.343 00:12:52.343 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:52.343 00:12:52.343 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:52.343 00:12:52.343 Get Feature: FDP Events for Placement handle: #0 00:12:52.343 ======================== 00:12:52.343 Number of FDP Events: 6 00:12:52.343 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:52.343 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:52.343 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:12:52.343 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:52.343 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:52.343 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:12:52.343 00:12:52.343 FDP events log page
00:12:52.343 =================== 00:12:52.343 Number of FDP events: 1 00:12:52.343 FDP Event #0: 00:12:52.343 Event Type: RU Not Written to Capacity 00:12:52.343 Placement Identifier: Valid 00:12:52.343 NSID: Valid 00:12:52.343 Location: Valid 00:12:52.343 Placement Identifier: 0 00:12:52.343 Event Timestamp: 8 00:12:52.343 Namespace Identifier: 1 00:12:52.343 Reclaim Group Identifier: 0 00:12:52.343 Reclaim Unit Handle Identifier: 0 00:12:52.343 00:12:52.343 FDP test passed 00:12:52.343 00:12:52.343 real 0m0.305s 00:12:52.343 user 0m0.110s 00:12:52.343 sys 0m0.092s 00:12:52.343 11:21:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.343 11:21:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:12:52.343 ************************************ 00:12:52.343 END TEST nvme_flexible_data_placement 00:12:52.343 ************************************ 00:12:52.343 00:12:52.343 real 0m8.164s 00:12:52.343 user 0m1.487s 00:12:52.343 sys 0m1.665s 00:12:52.343 11:21:14 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.343 11:21:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:52.343 ************************************ 00:12:52.343 END TEST nvme_fdp 00:12:52.343 ************************************ 00:12:52.343 11:21:14 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:12:52.343 11:21:14 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:52.343 11:21:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:52.343 11:21:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.343 11:21:14 -- common/autotest_common.sh@10 -- # set +x 00:12:52.343 ************************************ 00:12:52.343 START TEST nvme_rpc 00:12:52.343 ************************************ 00:12:52.343 11:21:14 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:52.343 * Looking for test storage... 
00:12:52.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:52.343 11:21:14 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:52.343 11:21:14 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:52.343 11:21:14 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:52.343 11:21:14 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:12:52.343 11:21:14 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.344 11:21:14 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:12:52.344 11:21:14 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.344 11:21:14 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:52.344 11:21:14 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:12:52.344 11:21:14 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.344 11:21:14 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:12:52.344 11:21:14 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.344 11:21:14 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.344 11:21:14 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.344 11:21:14 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:52.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.344 --rc genhtml_branch_coverage=1 00:12:52.344 --rc genhtml_function_coverage=1 00:12:52.344 --rc genhtml_legend=1 00:12:52.344 --rc geninfo_all_blocks=1 00:12:52.344 --rc geninfo_unexecuted_blocks=1 00:12:52.344 00:12:52.344 ' 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:52.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.344 --rc genhtml_branch_coverage=1 00:12:52.344 --rc genhtml_function_coverage=1 00:12:52.344 --rc genhtml_legend=1 00:12:52.344 --rc geninfo_all_blocks=1 00:12:52.344 --rc geninfo_unexecuted_blocks=1 00:12:52.344 00:12:52.344 ' 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:52.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.344 --rc genhtml_branch_coverage=1 00:12:52.344 --rc genhtml_function_coverage=1 00:12:52.344 --rc genhtml_legend=1 00:12:52.344 --rc geninfo_all_blocks=1 00:12:52.344 --rc geninfo_unexecuted_blocks=1 00:12:52.344 00:12:52.344 ' 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:52.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.344 --rc genhtml_branch_coverage=1 00:12:52.344 --rc genhtml_function_coverage=1 00:12:52.344 --rc genhtml_legend=1 00:12:52.344 --rc geninfo_all_blocks=1 00:12:52.344 --rc geninfo_unexecuted_blocks=1 00:12:52.344 00:12:52.344 ' 00:12:52.344 11:21:14 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:52.344 11:21:14 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:52.344 11:21:14 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:52.602 11:21:14 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:52.602 11:21:14 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:52.602 11:21:14 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:52.602 11:21:14 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:52.602 11:21:14 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67520 00:12:52.602 11:21:14 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:52.602 11:21:14 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:52.602 11:21:14 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67520 00:12:52.602 11:21:14 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67520 ']' 00:12:52.602 11:21:14 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.602 11:21:14 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.602 11:21:14 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.602 11:21:14 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.602 11:21:14 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.602 [2024-12-10 11:21:14.684775] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
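Before spdk_tgt starts above, get_first_nvme_bdf resolves the test's target device: gen_nvme.sh emits a JSON bdev config covering every local controller, jq extracts each PCI address, and the first of the four (0000:00:10.0) is chosen. A condensed sketch using the same repo path seen in this run:

    # List NVMe PCI addresses via gen_nvme.sh + jq and take the first.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh |
            jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers' >&2; exit 1; }
    echo "${bdfs[0]}"    # 0000:00:10.0 in this run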
00:12:52.602 [2024-12-10 11:21:14.685490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67520 ] 00:12:52.861 [2024-12-10 11:21:14.875951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:53.123 [2024-12-10 11:21:15.051162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.123 [2024-12-10 11:21:15.051164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.691 11:21:15 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.691 11:21:15 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:53.691 11:21:15 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:54.320 Nvme0n1 00:12:54.320 11:21:16 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:54.320 11:21:16 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:54.579 request: 00:12:54.579 { 00:12:54.579 "bdev_name": "Nvme0n1", 00:12:54.579 "filename": "non_existing_file", 00:12:54.579 "method": "bdev_nvme_apply_firmware", 00:12:54.579 "req_id": 1 00:12:54.579 } 00:12:54.579 Got JSON-RPC error response 00:12:54.579 response: 00:12:54.579 { 00:12:54.579 "code": -32603, 00:12:54.579 "message": "open file failed." 00:12:54.579 } 00:12:54.579 11:21:16 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:54.579 11:21:16 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:54.579 11:21:16 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:54.837 11:21:16 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:54.837 11:21:16 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67520 00:12:54.837 11:21:16 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67520 ']' 00:12:54.837 11:21:16 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67520 00:12:54.837 11:21:16 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:54.837 11:21:16 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.838 11:21:16 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67520 00:12:54.838 11:21:16 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.838 11:21:16 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.838 killing process with pid 67520 00:12:54.838 11:21:16 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67520' 00:12:54.838 11:21:16 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67520 00:12:54.838 11:21:16 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67520 00:12:56.740 00:12:56.740 real 0m4.571s 00:12:56.740 user 0m8.929s 00:12:56.740 sys 0m0.659s 00:12:56.740 11:21:18 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.740 11:21:18 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.740 ************************************ 00:12:56.740 END TEST nvme_rpc 00:12:56.740 ************************************ 00:12:56.998 11:21:18 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:56.998 11:21:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:12:56.998 11:21:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.998 11:21:18 -- common/autotest_common.sh@10 -- # set +x 00:12:56.998 ************************************ 00:12:56.998 START TEST nvme_rpc_timeouts 00:12:56.998 ************************************ 00:12:56.998 11:21:18 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:56.998 * Looking for test storage... 00:12:56.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:56.998 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:56.998 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:12:56.998 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:56.998 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:56.998 11:21:19 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:12:56.998 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.998 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:56.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.998 --rc genhtml_branch_coverage=1 00:12:56.998 --rc genhtml_function_coverage=1 00:12:56.998 --rc genhtml_legend=1 00:12:56.998 --rc geninfo_all_blocks=1 00:12:56.998 --rc geninfo_unexecuted_blocks=1 00:12:56.998 00:12:56.998 ' 00:12:56.998 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:56.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.998 --rc genhtml_branch_coverage=1 00:12:56.998 --rc genhtml_function_coverage=1 00:12:56.998 --rc genhtml_legend=1 00:12:56.998 --rc geninfo_all_blocks=1 00:12:56.999 --rc geninfo_unexecuted_blocks=1 00:12:56.999 00:12:56.999 ' 00:12:56.999 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:56.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.999 --rc genhtml_branch_coverage=1 00:12:56.999 --rc genhtml_function_coverage=1 00:12:56.999 --rc genhtml_legend=1 00:12:56.999 --rc geninfo_all_blocks=1 00:12:56.999 --rc geninfo_unexecuted_blocks=1 00:12:56.999 00:12:56.999 ' 00:12:56.999 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:56.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.999 --rc genhtml_branch_coverage=1 00:12:56.999 --rc genhtml_function_coverage=1 00:12:56.999 --rc genhtml_legend=1 00:12:56.999 --rc geninfo_all_blocks=1 00:12:56.999 --rc geninfo_unexecuted_blocks=1 00:12:56.999 00:12:56.999 ' 00:12:56.999 11:21:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:56.999 11:21:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67604 00:12:56.999 11:21:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67604 00:12:56.999 11:21:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67636 00:12:56.999 11:21:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
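The nvme_rpc_timeouts setup being traced here (tmp files and cleanup trap above, the spdk_tgt launch and waitforlisten just below) condenses to a small harness. A sketch, with paths and flags copied from the trace; waitforlisten is SPDK's own helper that blocks until /var/tmp/spdk.sock accepts RPCs:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tmpfile_default_settings=/tmp/settings_default_67604    # the trace keys these to the script's PID
  tmpfile_modified_settings=/tmp/settings_modified_67604
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &   # 0x3 = two reactor cores, matching the log
  spdk_tgt_pid=$!
  trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$spdk_tgt_pid"   # polls until the RPC socket is up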
00:12:56.999 11:21:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:56.999 11:21:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67636 00:12:56.999 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67636 ']' 00:12:56.999 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.999 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.999 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.999 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.999 11:21:19 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:57.257 [2024-12-10 11:21:19.230982] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:12:57.257 [2024-12-10 11:21:19.231171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67636 ] 00:12:57.257 [2024-12-10 11:21:19.418896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:57.515 [2024-12-10 11:21:19.573582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.516 [2024-12-10 11:21:19.573598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.450 11:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.450 Checking default timeout settings: 00:12:58.450 11:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:12:58.450 11:21:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:58.450 11:21:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:58.709 Making settings changes with rpc: 00:12:58.709 11:21:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:58.709 11:21:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:58.967 Check default vs. modified settings: 00:12:58.967 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:12:58.967 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67604 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67604 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:59.535 Setting action_on_timeout is changed as expected. 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67604 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67604 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:59.535 Setting timeout_us is changed as expected. 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
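The default-vs-modified check runs the same extraction pipeline once per field (the timeout_admin_us iteration continues just below). One iteration, condensed from the trace; only the success branch is exercised in this run, so the failure handling here is a sketch:

  for setting in action_on_timeout timeout_us timeout_admin_us; do
      # save_config emits JSON lines like '"timeout_us": 0,'; keep only alphanumerics
      # so values such as 'none,' and '12000000,' compare cleanly
      setting_before=$(grep "$setting" /tmp/settings_default_67604 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      setting_modified=$(grep "$setting" /tmp/settings_modified_67604 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      # bdev_nvme_set_options must have changed every field; equal values mean failure
      [ "$setting_before" == "$setting_modified" ] && exit 1
      echo "Setting $setting is changed as expected."
  done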
00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67604 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67604 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:59.535 Setting timeout_admin_us is changed as expected. 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67604 /tmp/settings_modified_67604 00:12:59.535 11:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67636 00:12:59.535 11:21:21 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67636 ']' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67636 00:12:59.535 11:21:21 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:12:59.535 11:21:21 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67636 00:12:59.535 11:21:21 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.535 11:21:21 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.535 killing process with pid 67636 00:12:59.535 11:21:21 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67636' 00:12:59.535 11:21:21 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67636 00:12:59.535 11:21:21 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67636 00:13:02.069 RPC TIMEOUT SETTING TEST PASSED. 00:13:02.069 11:21:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
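killprocess, traced twice in this section (pids 67520 and 67636), follows the same guarded shape each time. A sketch of what the autotest_common.sh trace shows:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1        # @954: no pid, nothing to do
      kill -0 "$pid" || return 1       # @958: signal 0 is a liveness probe only
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
          # @964 compares against 'sudo'; that branch is never taken here (name is reactor_0)
      fi
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"       # @973/@978: terminate, then reap the exit status
  }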
00:13:02.069 00:13:02.069 real 0m4.739s 00:13:02.069 user 0m9.347s 00:13:02.069 sys 0m0.603s 00:13:02.069 11:21:23 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.069 ************************************ 00:13:02.069 11:21:23 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:02.069 END TEST nvme_rpc_timeouts 00:13:02.069 ************************************ 00:13:02.069 11:21:23 -- spdk/autotest.sh@239 -- # uname -s 00:13:02.069 11:21:23 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:13:02.069 11:21:23 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:02.069 11:21:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:02.069 11:21:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.069 11:21:23 -- common/autotest_common.sh@10 -- # set +x 00:13:02.069 ************************************ 00:13:02.069 START TEST sw_hotplug 00:13:02.069 ************************************ 00:13:02.069 11:21:23 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:02.069 * Looking for test storage... 00:13:02.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:02.069 11:21:23 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:02.069 11:21:23 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:13:02.069 11:21:23 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:02.069 11:21:23 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:13:02.069 11:21:23 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:13:02.070 11:21:23 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.070 11:21:23 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:13:02.070 11:21:23 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:13:02.070 11:21:23 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:13:02.070 11:21:23 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:13:02.070 11:21:23 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.070 11:21:23 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:13:02.070 11:21:23 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.070 11:21:23 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.070 11:21:23 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.070 11:21:23 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:13:02.070 11:21:23 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.070 11:21:23 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:02.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.070 --rc genhtml_branch_coverage=1 00:13:02.070 --rc genhtml_function_coverage=1 00:13:02.070 --rc genhtml_legend=1 00:13:02.070 --rc geninfo_all_blocks=1 00:13:02.070 --rc geninfo_unexecuted_blocks=1 00:13:02.070 00:13:02.070 ' 00:13:02.070 11:21:23 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:02.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.070 --rc genhtml_branch_coverage=1 00:13:02.070 --rc genhtml_function_coverage=1 00:13:02.070 --rc genhtml_legend=1 00:13:02.070 --rc geninfo_all_blocks=1 00:13:02.070 --rc geninfo_unexecuted_blocks=1 00:13:02.070 00:13:02.070 ' 00:13:02.070 11:21:23 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:02.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.070 --rc genhtml_branch_coverage=1 00:13:02.070 --rc genhtml_function_coverage=1 00:13:02.070 --rc genhtml_legend=1 00:13:02.070 --rc geninfo_all_blocks=1 00:13:02.070 --rc geninfo_unexecuted_blocks=1 00:13:02.070 00:13:02.070 ' 00:13:02.070 11:21:23 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:02.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.070 --rc genhtml_branch_coverage=1 00:13:02.070 --rc genhtml_function_coverage=1 00:13:02.070 --rc genhtml_legend=1 00:13:02.070 --rc geninfo_all_blocks=1 00:13:02.070 --rc geninfo_unexecuted_blocks=1 00:13:02.070 00:13:02.070 ' 00:13:02.070 11:21:23 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:02.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:02.328 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:02.328 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:02.329 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:02.329 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:02.329 11:21:24 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:02.329 11:21:24 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:02.329 11:21:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
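nvme_in_userspace, expanded at length in the trace that follows, is at heart one lspci filter (PCI class 01, subclass 08, prog-if 02, i.e. an NVMe controller) plus a per-BDF allow-list check and a kernel-driver check. The filter, verbatim from the trace:

  # print full BDFs of every NVMe controller (class code 0108, prog-if 02)
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # each candidate BDF is then kept only if [[ -e /sys/bus/pci/drivers/nvme/$bdf ]],
  # i.e. it is currently bound to the kernel nvme driver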
00:13:02.329 11:21:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@233 -- # local class 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:02.329 11:21:24 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:13:02.329 11:21:24 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:02.329 11:21:24 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:02.329 11:21:24 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:02.329 11:21:24 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:02.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:02.896 Waiting for block devices as requested 00:13:02.896 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:03.154 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:03.154 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:03.154 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.421 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:08.421 11:21:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:08.421 11:21:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:08.679 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:08.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:08.679 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:09.246 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:09.246 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:09.246 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:09.505 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:09.505 11:21:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.505 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:09.505 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:09.505 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68514 00:13:09.505 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:09.505 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:09.505 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:09.505 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:09.506 11:21:31 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:09.506 11:21:31 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:09.506 11:21:31 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:09.506 11:21:31 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:09.506 11:21:31 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:13:09.506 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:09.506 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:09.506 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:09.506 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:09.506 11:21:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:09.765 Initializing NVMe Controllers 00:13:09.765 Attaching to 0000:00:10.0 00:13:09.765 Attaching to 0000:00:11.0 00:13:09.765 Attached to 0000:00:10.0 00:13:09.765 Attached to 0000:00:11.0 00:13:09.765 Initialization complete. Starting I/O... 
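run_hotplug backgrounds SPDK's example hotplug binary and then detaches and reattaches the controllers out from under it. Condensed from the trace (the meaning of -i/-t/-n/-r is not documented in the log itself; -n 6 -r 6 presumably bound the attach/remove events the example expects):

  /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning &
  hotplug_pid=$!
  trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT
  # 3 hotplug events, 6 s settle time per event, use_bdev=false (raw NVMe, no bdev layer)
  debug_remove_attach_helper 3 6 false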
00:13:09.765 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:09.765 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:09.765 00:13:10.699 QEMU NVMe Ctrl (12340 ): 1038 I/Os completed (+1038) 00:13:10.699 QEMU NVMe Ctrl (12341 ): 1194 I/Os completed (+1194) 00:13:10.699 00:13:12.074 QEMU NVMe Ctrl (12340 ): 2273 I/Os completed (+1235) 00:13:12.074 QEMU NVMe Ctrl (12341 ): 2692 I/Os completed (+1498) 00:13:12.074 00:13:13.013 QEMU NVMe Ctrl (12340 ): 4572 I/Os completed (+2299) 00:13:13.013 QEMU NVMe Ctrl (12341 ): 4555 I/Os completed (+1863) 00:13:13.013 00:13:13.950 QEMU NVMe Ctrl (12340 ): 6205 I/Os completed (+1633) 00:13:13.950 QEMU NVMe Ctrl (12341 ): 6377 I/Os completed (+1822) 00:13:13.950 00:13:14.888 QEMU NVMe Ctrl (12340 ): 7687 I/Os completed (+1482) 00:13:14.888 QEMU NVMe Ctrl (12341 ): 8066 I/Os completed (+1689) 00:13:14.888 00:13:15.455 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:15.455 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:15.455 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:15.455 [2024-12-10 11:21:37.578172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:15.455 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:15.455 [2024-12-10 11:21:37.580266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 [2024-12-10 11:21:37.580343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 [2024-12-10 11:21:37.580374] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 [2024-12-10 11:21:37.580400] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:15.455 [2024-12-10 11:21:37.583329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 [2024-12-10 11:21:37.583391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 [2024-12-10 11:21:37.583416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 [2024-12-10 11:21:37.583441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:15.455 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:15.455 [2024-12-10 11:21:37.608346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:15.455 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:15.455 [2024-12-10 11:21:37.610193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 [2024-12-10 11:21:37.610250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 [2024-12-10 11:21:37.610282] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 [2024-12-10 11:21:37.610308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:15.455 [2024-12-10 11:21:37.612924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 [2024-12-10 11:21:37.612976] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 [2024-12-10 11:21:37.613002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.455 [2024-12-10 11:21:37.613022] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.713 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:15.713 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:15.713 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:15.713 EAL: Scan for (pci) bus failed. 00:13:15.713 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:15.713 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:15.713 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:15.713 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:15.713 00:13:15.713 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:15.713 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:15.713 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:15.713 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:15.713 Attaching to 0000:00:10.0 00:13:15.713 Attached to 0000:00:10.0 00:13:15.972 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:15.972 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:15.972 11:21:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:15.972 Attaching to 0000:00:11.0 00:13:15.972 Attached to 0000:00:11.0 00:13:16.907 QEMU NVMe Ctrl (12340 ): 1533 I/Os completed (+1533) 00:13:16.907 QEMU NVMe Ctrl (12341 ): 1459 I/Os completed (+1459) 00:13:16.907 00:13:17.841 QEMU NVMe Ctrl (12340 ): 3078 I/Os completed (+1545) 00:13:17.841 QEMU NVMe Ctrl (12341 ): 3168 I/Os completed (+1709) 00:13:17.841 00:13:18.776 QEMU NVMe Ctrl (12340 ): 4746 I/Os completed (+1668) 00:13:18.776 QEMU NVMe Ctrl (12341 ): 4901 I/Os completed (+1733) 00:13:18.776 00:13:19.710 QEMU NVMe Ctrl (12340 ): 6232 I/Os completed (+1486) 00:13:19.710 QEMU NVMe Ctrl (12341 ): 6650 I/Os completed (+1749) 00:13:19.710 00:13:21.087 QEMU NVMe Ctrl (12340 ): 7799 I/Os completed (+1567) 00:13:21.087 QEMU NVMe Ctrl (12341 ): 8428 I/Os completed (+1778) 00:13:21.087 00:13:22.022 QEMU NVMe Ctrl (12340 ): 9407 I/Os completed (+1608) 00:13:22.022 QEMU NVMe Ctrl (12341 ): 10173 I/Os completed (+1745) 00:13:22.022 00:13:22.956 QEMU NVMe Ctrl (12340 ): 10835 I/Os completed (+1428) 00:13:22.956 QEMU NVMe Ctrl (12341 ): 11766 I/Os completed (+1593) 
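xtrace strips redirections, so the bare 'echo 1' and 'echo uio_pci_generic' lines above do not show their targets. A plausible reconstruction of one remove/reattach cycle, assuming the standard Linux PCI sysfs interface; the paths below are inferred, not shown in the log:

  for dev in "${nvmes[@]}"; do
      echo 1 > "/sys/bus/pci/devices/$dev/remove"     # @40: surprise-remove; matches the 'failed state' errors
  done
  echo 1 > /sys/bus/pci/rescan                        # @56: re-enumerate and bring the functions back
  for dev in "${nvmes[@]}"; do
      echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @59
      echo "$dev" > /sys/bus/pci/drivers_probe        # @60-61: the trace echoes the BDF twice
      echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # @62: clear the override
  done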
00:13:22.956 00:13:23.890 QEMU NVMe Ctrl (12340 ): 12363 I/Os completed (+1528) 00:13:23.890 QEMU NVMe Ctrl (12341 ): 13410 I/Os completed (+1644) 00:13:23.890 00:13:24.826 QEMU NVMe Ctrl (12340 ): 13819 I/Os completed (+1456) 00:13:24.826 QEMU NVMe Ctrl (12341 ): 15010 I/Os completed (+1600) 00:13:24.826 00:13:25.761 QEMU NVMe Ctrl (12340 ): 15456 I/Os completed (+1637) 00:13:25.761 QEMU NVMe Ctrl (12341 ): 16781 I/Os completed (+1771) 00:13:25.761 00:13:26.696 QEMU NVMe Ctrl (12340 ): 17208 I/Os completed (+1752) 00:13:26.696 QEMU NVMe Ctrl (12341 ): 18561 I/Os completed (+1780) 00:13:26.696 00:13:28.072 QEMU NVMe Ctrl (12340 ): 18904 I/Os completed (+1696) 00:13:28.072 QEMU NVMe Ctrl (12341 ): 20390 I/Os completed (+1829) 00:13:28.072 00:13:28.072 11:21:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:28.072 11:21:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:28.072 11:21:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:28.072 11:21:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:28.072 [2024-12-10 11:21:49.944905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:28.072 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:28.072 [2024-12-10 11:21:49.947481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 [2024-12-10 11:21:49.947562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 [2024-12-10 11:21:49.947598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 [2024-12-10 11:21:49.947649] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:28.072 [2024-12-10 11:21:49.952678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 [2024-12-10 11:21:49.952749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 [2024-12-10 11:21:49.952778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 [2024-12-10 11:21:49.952806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 11:21:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:28.072 11:21:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:28.072 [2024-12-10 11:21:49.977685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:28.072 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:28.072 [2024-12-10 11:21:49.980114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 [2024-12-10 11:21:49.980190] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 [2024-12-10 11:21:49.980234] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 [2024-12-10 11:21:49.980284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:28.072 [2024-12-10 11:21:49.983669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 [2024-12-10 11:21:49.983732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 [2024-12-10 11:21:49.983762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 [2024-12-10 11:21:49.983796] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.072 11:21:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:28.072 11:21:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:28.072 11:21:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:28.072 11:21:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:28.072 11:21:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:28.072 11:21:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:28.072 11:21:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:28.072 11:21:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:28.072 11:21:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:28.072 11:21:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:28.072 Attaching to 0000:00:10.0 00:13:28.072 Attached to 0000:00:10.0 00:13:28.332 11:21:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:28.332 11:21:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:28.332 11:21:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:28.332 Attaching to 0000:00:11.0 00:13:28.332 Attached to 0000:00:11.0 00:13:28.899 QEMU NVMe Ctrl (12340 ): 992 I/Os completed (+992) 00:13:28.899 QEMU NVMe Ctrl (12341 ): 912 I/Os completed (+912) 00:13:28.899 00:13:29.834 QEMU NVMe Ctrl (12340 ): 2682 I/Os completed (+1690) 00:13:29.834 QEMU NVMe Ctrl (12341 ): 2698 I/Os completed (+1786) 00:13:29.834 00:13:30.809 QEMU NVMe Ctrl (12340 ): 4321 I/Os completed (+1639) 00:13:30.809 QEMU NVMe Ctrl (12341 ): 4404 I/Os completed (+1706) 00:13:30.809 00:13:31.742 QEMU NVMe Ctrl (12340 ): 6203 I/Os completed (+1882) 00:13:31.742 QEMU NVMe Ctrl (12341 ): 6315 I/Os completed (+1911) 00:13:31.742 00:13:32.677 QEMU NVMe Ctrl (12340 ): 7821 I/Os completed (+1618) 00:13:32.677 QEMU NVMe Ctrl (12341 ): 8025 I/Os completed (+1710) 00:13:32.677 00:13:34.053 QEMU NVMe Ctrl (12340 ): 9325 I/Os completed (+1504) 00:13:34.053 QEMU NVMe Ctrl (12341 ): 9708 I/Os completed (+1683) 00:13:34.053 00:13:34.990 QEMU NVMe Ctrl (12340 ): 10890 I/Os completed (+1565) 00:13:34.990 QEMU NVMe Ctrl (12341 ): 11437 I/Os completed (+1729) 00:13:34.990 00:13:35.927 QEMU NVMe Ctrl (12340 ): 12561 I/Os completed (+1671) 00:13:35.927 QEMU NVMe Ctrl (12341 ): 13131 I/Os completed (+1694) 00:13:35.927 00:13:36.862 QEMU 
NVMe Ctrl (12340 ): 14357 I/Os completed (+1796) 00:13:36.862 QEMU NVMe Ctrl (12341 ): 14975 I/Os completed (+1844) 00:13:36.862 00:13:37.797 QEMU NVMe Ctrl (12340 ): 15905 I/Os completed (+1548) 00:13:37.797 QEMU NVMe Ctrl (12341 ): 16676 I/Os completed (+1701) 00:13:37.797 00:13:38.733 QEMU NVMe Ctrl (12340 ): 17603 I/Os completed (+1698) 00:13:38.733 QEMU NVMe Ctrl (12341 ): 18529 I/Os completed (+1853) 00:13:38.733 00:13:39.667 QEMU NVMe Ctrl (12340 ): 19223 I/Os completed (+1620) 00:13:39.667 QEMU NVMe Ctrl (12341 ): 20309 I/Os completed (+1780) 00:13:39.667 00:13:40.234 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:40.234 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:40.234 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:40.234 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:40.234 [2024-12-10 11:22:02.271799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:40.234 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:40.234 [2024-12-10 11:22:02.274181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 [2024-12-10 11:22:02.274267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 [2024-12-10 11:22:02.274308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 [2024-12-10 11:22:02.274344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:40.234 [2024-12-10 11:22:02.278124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 [2024-12-10 11:22:02.278199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 [2024-12-10 11:22:02.278228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 [2024-12-10 11:22:02.278254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:40.234 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:40.234 [2024-12-10 11:22:02.302066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:40.234 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:40.234 [2024-12-10 11:22:02.304265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 [2024-12-10 11:22:02.304347] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 [2024-12-10 11:22:02.304384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 [2024-12-10 11:22:02.304413] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:40.234 [2024-12-10 11:22:02.307519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 [2024-12-10 11:22:02.307582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 [2024-12-10 11:22:02.307616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 [2024-12-10 11:22:02.307657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.234 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:40.234 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:40.493 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:40.493 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:40.493 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:40.493 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:40.493 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:40.493 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:40.493 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:40.493 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:40.493 Attaching to 0000:00:10.0 00:13:40.493 Attached to 0000:00:10.0 00:13:40.493 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:40.493 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:40.493 11:22:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:40.493 Attaching to 0000:00:11.0 00:13:40.493 Attached to 0000:00:11.0 00:13:40.493 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:40.493 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:40.493 [2024-12-10 11:22:02.593445] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:13:52.697 11:22:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:52.697 11:22:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:52.697 11:22:14 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.01 00:13:52.697 11:22:14 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.01 00:13:52.697 11:22:14 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:52.697 11:22:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.01 00:13:52.697 11:22:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.01 2 00:13:52.697 remove_attach_helper took 43.01s to complete (handling 2 nvme drive(s)) 11:22:14 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:13:59.257 11:22:20 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68514 00:13:59.257 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68514) - No such process 00:13:59.257 11:22:20 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68514 00:13:59.257 11:22:20 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:13:59.257 11:22:20 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:13:59.257 11:22:20 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:13:59.257 11:22:20 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69054 00:13:59.257 11:22:20 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:59.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.257 11:22:20 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:59.257 11:22:20 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69054 00:13:59.257 11:22:20 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69054 ']' 00:13:59.257 11:22:20 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.257 11:22:20 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.257 11:22:20 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.257 11:22:20 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.257 11:22:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:59.257 [2024-12-10 11:22:20.713709] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:13:59.257 [2024-12-10 11:22:20.713883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69054 ] 00:13:59.257 [2024-12-10 11:22:20.890008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.257 [2024-12-10 11:22:20.993582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.824 11:22:21 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.824 11:22:21 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:13:59.824 11:22:21 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:59.824 11:22:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.824 11:22:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:59.824 11:22:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.824 11:22:21 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:13:59.824 11:22:21 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:59.824 11:22:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:59.824 11:22:21 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:59.824 11:22:21 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:59.824 11:22:21 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:59.824 11:22:21 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:59.824 11:22:21 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:13:59.824 11:22:21 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:59.824 11:22:21 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:59.824 11:22:21 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:59.824 11:22:21 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:59.824 11:22:21 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:06.386 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:06.386 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:06.387 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:06.387 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:06.387 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:06.387 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:06.387 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:06.387 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:06.387 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:06.387 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:06.387 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:06.387 11:22:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.387 11:22:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:06.387 [2024-12-10 11:22:27.861428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:06.387 [2024-12-10 11:22:27.864185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.387 [2024-12-10 11:22:27.864240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.387 [2024-12-10 11:22:27.864266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.387 [2024-12-10 11:22:27.864295] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.387 [2024-12-10 11:22:27.864311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.387 [2024-12-10 11:22:27.864327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.387 [2024-12-10 11:22:27.864342] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.387 [2024-12-10 11:22:27.864357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.387 [2024-12-10 11:22:27.864370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.387 [2024-12-10 11:22:27.864391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.387 [2024-12-10 11:22:27.864405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.387 [2024-12-10 11:22:27.864420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.387 11:22:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.387 11:22:27 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:06.387 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:06.387 [2024-12-10 11:22:28.361453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:06.387 [2024-12-10 11:22:28.364504] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.387 [2024-12-10 11:22:28.364707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.387 [2024-12-10 11:22:28.364896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.387 [2024-12-10 11:22:28.365182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.387 [2024-12-10 11:22:28.365421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.387 [2024-12-10 11:22:28.365450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.387 [2024-12-10 11:22:28.365473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.387 [2024-12-10 11:22:28.365488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.387 [2024-12-10 11:22:28.365504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.387 [2024-12-10 11:22:28.365519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.387 [2024-12-10 11:22:28.365534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.387 [2024-12-10 11:22:28.365548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.387 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:06.387 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:06.387 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:06.387 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:06.387 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:06.387 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:06.387 11:22:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.387 11:22:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:06.387 11:22:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.387 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:06.387 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:06.387 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:06.387 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:06.387 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:06.646 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:06.646 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:06.646 
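With hotplug enabled on the target (bdev_nvme_set_hotplug -e, above), the script polls the bdev layer over JSON-RPC until the removed controllers' PCI addresses vanish. The helper and wait loop, condensed from the trace; rpc_cmd is the autotest wrapper around scripts/rpc.py:

  bdev_bdfs() {
      # unique PCI addresses of every NVMe-backed bdev the target still exposes
      rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
  }
  bdfs=($(bdev_bdfs))
  while (( ${#bdfs[@]} > 0 )); do      # '(( 2 > 0 ))' and then '(( 0 > 0 ))' in the trace
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
      bdfs=($(bdev_bdfs))
  done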
11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:06.646 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:06.646 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:06.646 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:06.646 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:06.646 11:22:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:18.849 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:18.849 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:18.849 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:18.849 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:18.849 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:18.849 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:18.849 11:22:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.849 11:22:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.849 11:22:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.849 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:18.849 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:18.849 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:18.849 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:18.849 [2024-12-10 11:22:40.761975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:18.849 [2024-12-10 11:22:40.765308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.850 [2024-12-10 11:22:40.765490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.850 [2024-12-10 11:22:40.765722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.850 [2024-12-10 11:22:40.765914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.850 [2024-12-10 11:22:40.766138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.850 [2024-12-10 11:22:40.766326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.850 [2024-12-10 11:22:40.766548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.850 [2024-12-10 11:22:40.766725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.850 [2024-12-10 11:22:40.766810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.850 [2024-12-10 11:22:40.766944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.850 [2024-12-10 11:22:40.766995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.850 [2024-12-10 11:22:40.767177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.850 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:18.850 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:18.850 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:18.850 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:18.850 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:18.850 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:18.850 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:18.850 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:18.850 11:22:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.850 11:22:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.850 11:22:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.850 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:18.850 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:19.108 [2024-12-10 11:22:41.161994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:19.108 [2024-12-10 11:22:41.164964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.108 [2024-12-10 11:22:41.165139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.108 [2024-12-10 11:22:41.165315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.108 [2024-12-10 11:22:41.165567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.108 [2024-12-10 11:22:41.165757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.108 [2024-12-10 11:22:41.165935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.108 [2024-12-10 11:22:41.166142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.108 [2024-12-10 11:22:41.166310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.108 [2024-12-10 11:22:41.166481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.108 [2024-12-10 11:22:41.166717] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.108 [2024-12-10 11:22:41.166925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.108 [2024-12-10 11:22:41.167079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.367 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:19.367 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:19.367 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:19.367 11:22:41 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:14:19.367 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:19.367 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:19.367 11:22:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.367 11:22:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:19.367 11:22:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.367 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:19.367 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:19.367 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:19.367 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:19.367 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:19.660 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:19.660 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:19.660 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:19.660 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:19.660 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:19.660 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:19.660 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:19.660 11:22:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:31.862 11:22:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.862 11:22:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:31.862 11:22:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:31.862 11:22:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.862 11:22:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:31.862 [2024-12-10 11:22:53.862474] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:31.862 11:22:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.862 [2024-12-10 11:22:53.865329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.862 [2024-12-10 11:22:53.865496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.862 [2024-12-10 11:22:53.865690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.862 [2024-12-10 11:22:53.865857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.862 [2024-12-10 11:22:53.866021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.862 [2024-12-10 11:22:53.866172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.862 [2024-12-10 11:22:53.866373] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.862 [2024-12-10 11:22:53.866505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.862 [2024-12-10 11:22:53.866682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.862 [2024-12-10 11:22:53.866839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.862 [2024-12-10 11:22:53.867006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.862 [2024-12-10 11:22:53.867169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:31.862 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:32.121 [2024-12-10 11:22:54.262507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
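Every hotplug event above has the same shape: the test surprise-removes each controller, then polls bdev_bdfs until SPDK has dropped them all, which is what the (( 2 > 0 )), (( 1 > 0 )) and (( 0 > 0 )) checks and the half-second sleeps correspond to. A sketch of that loop; the trace shows only the bare echo at sw_hotplug.sh@40, so the sysfs path here is an assumption:

    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # surprise hot-remove (assumed target)
    done
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

The ABORTED - BY REQUEST completions interleaved with the loop are the expected fallout: each controller enters the failed state and its outstanding admin commands (the queued ASYNC EVENT REQUESTs) are aborted by the driver.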
00:14:32.121 [2024-12-10 11:22:54.265732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:32.121 [2024-12-10 11:22:54.265919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.121 [2024-12-10 11:22:54.266086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.121 [2024-12-10 11:22:54.266392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:32.121 [2024-12-10 11:22:54.266570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.121 [2024-12-10 11:22:54.266779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.121 [2024-12-10 11:22:54.266950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:32.121 [2024-12-10 11:22:54.267175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.121 [2024-12-10 11:22:54.267381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.121 [2024-12-10 11:22:54.267593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:32.121 [2024-12-10 11:22:54.267806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.121 [2024-12-10 11:22:54.268002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.380 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:32.380 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:32.380 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:32.380 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:32.380 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:32.380 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:32.380 11:22:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.380 11:22:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:32.380 11:22:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.380 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:32.380 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:32.380 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:32.380 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:32.380 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:32.639 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:32.639 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:32.639 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:32.639 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:32.639 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:32.639 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:32.639 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:32.639 11:22:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.01 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.01 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.01 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.01 2 00:14:44.841 remove_attach_helper took 45.01s to complete (handling 2 nvme drive(s)) 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:44.841 11:23:06 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:44.841 11:23:06 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:44.841 11:23:06 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:51.462 11:23:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.462 11:23:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:51.462 11:23:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.462 [2024-12-10 11:23:12.901833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:51.462 [2024-12-10 11:23:12.903953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.462 [2024-12-10 11:23:12.904137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.462 [2024-12-10 11:23:12.904315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.462 [2024-12-10 11:23:12.904511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.462 [2024-12-10 11:23:12.904718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.462 [2024-12-10 11:23:12.904892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.462 [2024-12-10 11:23:12.905088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.462 [2024-12-10 11:23:12.905275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.462 [2024-12-10 11:23:12.905432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.462 [2024-12-10 11:23:12.905599] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.462 [2024-12-10 11:23:12.905765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.462 [2024-12-10 11:23:12.905837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:51.462 11:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:51.462 [2024-12-10 11:23:13.301811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
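Once the controllers are gone, sw_hotplug.sh@56-@62 bring them back: a rescan, then each BDF is pinned to uio_pci_generic before the override is cleared again. The trace records the echoed values but not their destinations, so the sysfs targets below are assumptions consistent with standard Linux PCI rebinding; the trace also echoes each BDF twice (@60/@61), plausibly an unbind followed by a probe, shown once here:

    echo 1 > /sys/bus/pci/rescan                      # @56: rediscover the removed devices
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @59
        echo "$dev" > /sys/bus/pci/drivers_probe      # @60/@61: bind the device (assumed)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # @62
    done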
00:14:51.462 [2024-12-10 11:23:13.304522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.462 [2024-12-10 11:23:13.304731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.462 [2024-12-10 11:23:13.304772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.462 [2024-12-10 11:23:13.304801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.462 [2024-12-10 11:23:13.304820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.462 [2024-12-10 11:23:13.304834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.462 [2024-12-10 11:23:13.304851] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.462 [2024-12-10 11:23:13.304864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.462 [2024-12-10 11:23:13.304880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.462 [2024-12-10 11:23:13.304895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.462 [2024-12-10 11:23:13.304912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.462 [2024-12-10 11:23:13.304925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.462 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:51.462 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:51.462 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:51.462 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:51.462 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:51.462 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:51.462 11:23:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.462 11:23:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:51.462 11:23:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.462 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:51.462 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:51.462 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:51.462 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:51.462 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:51.722 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:51.722 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:51.722 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:51.722 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:51.722 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:51.722 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:51.722 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:51.722 11:23:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:03.938 11:23:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.938 11:23:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:03.938 11:23:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:03.938 11:23:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.938 11:23:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:03.938 [2024-12-10 11:23:25.901996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
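Between the two timed rounds the test also flips SPDK's built-in hotplug monitor off and back on (the bdev_nvme_set_hotplug -d and -e RPCs at sw_hotplug.sh@119-@120 above), so the second 45-second pass exercises surprise removal with the poller active. Against a running target that corresponds to:

    rpc_cmd bdev_nvme_set_hotplug -d   # stop the hotplug poller
    rpc_cmd bdev_nvme_set_hotplug -e   # restart it for the next round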
00:15:03.938 [2024-12-10 11:23:25.904593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.938 [2024-12-10 11:23:25.904893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.938 [2024-12-10 11:23:25.905140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.938 [2024-12-10 11:23:25.905404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.938 [2024-12-10 11:23:25.905603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.938 [2024-12-10 11:23:25.905916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.938 [2024-12-10 11:23:25.906158] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.938 [2024-12-10 11:23:25.906358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.938 [2024-12-10 11:23:25.906679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.938 [2024-12-10 11:23:25.906987] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.938 [2024-12-10 11:23:25.907194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.938 [2024-12-10 11:23:25.907438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.938 11:23:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:03.938 11:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:04.197 [2024-12-10 11:23:26.302018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:04.197 [2024-12-10 11:23:26.305135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.197 [2024-12-10 11:23:26.305395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.197 [2024-12-10 11:23:26.305693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.197 [2024-12-10 11:23:26.305996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.197 [2024-12-10 11:23:26.306222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.197 [2024-12-10 11:23:26.306468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.197 [2024-12-10 11:23:26.306726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.197 [2024-12-10 11:23:26.306941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.197 [2024-12-10 11:23:26.307108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.197 [2024-12-10 11:23:26.307331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.197 [2024-12-10 11:23:26.307616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.197 [2024-12-10 11:23:26.307892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.456 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:04.456 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:04.456 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:04.456 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:04.456 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:04.456 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:04.456 11:23:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.456 11:23:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:04.456 11:23:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.456 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:04.456 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:04.456 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:04.456 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:04.456 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:04.715 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:04.715 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:04.715 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:04.715 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:04.715 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:04.715 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:04.715 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:04.715 11:23:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:16.920 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:16.920 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:16.920 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:16.920 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:16.920 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:16.921 11:23:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.921 11:23:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:16.921 11:23:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:16.921 11:23:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.921 11:23:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:16.921 [2024-12-10 11:23:38.902151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:16.921 [2024-12-10 11:23:38.904329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.921 [2024-12-10 11:23:38.904389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.921 [2024-12-10 11:23:38.904411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.921 [2024-12-10 11:23:38.904441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.921 [2024-12-10 11:23:38.904457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.921 [2024-12-10 11:23:38.904473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.921 [2024-12-10 11:23:38.904488] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.921 [2024-12-10 11:23:38.904507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.921 [2024-12-10 11:23:38.904521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.921 [2024-12-10 11:23:38.904537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.921 [2024-12-10 11:23:38.904550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.921 [2024-12-10 11:23:38.904569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.921 11:23:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:16.921 11:23:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:17.180 [2024-12-10 11:23:39.302168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
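After the re-attach, the helper sleeps 12 seconds (@66) and then asserts that enumeration again yields exactly the original pair (@71). The backslash-heavy pattern in the trace is only xtrace's escaping of what is a plain string comparison:

    sleep 12
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == '0000:00:10.0 0000:00:11.0' ]]   # both controllers are back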
00:15:17.180 [2024-12-10 11:23:39.305266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.180 [2024-12-10 11:23:39.305320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.180 [2024-12-10 11:23:39.305345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.180 [2024-12-10 11:23:39.305371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.180 [2024-12-10 11:23:39.305391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.180 [2024-12-10 11:23:39.305405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.180 [2024-12-10 11:23:39.305423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.180 [2024-12-10 11:23:39.305436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.180 [2024-12-10 11:23:39.305452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.180 [2024-12-10 11:23:39.305469] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.180 [2024-12-10 11:23:39.305501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.180 [2024-12-10 11:23:39.305522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.439 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:17.439 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:17.439 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:17.439 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:17.439 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:17.439 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:17.439 11:23:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.439 11:23:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:17.439 11:23:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.439 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:17.439 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:17.697 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:17.697 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:17.697 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:17.697 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:17.697 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:17.697 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:17.697 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:17.697 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:17.697 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:17.697 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:17.697 11:23:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:29.944 11:23:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:29.944 11:23:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:29.944 11:23:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:29.944 11:23:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:29.944 11:23:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:29.944 11:23:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.944 11:23:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:29.944 11:23:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.04 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.04 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:29.944 11:23:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.04 00:15:29.944 11:23:51 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.04 2 00:15:29.944 remove_attach_helper took 45.04s to complete (handling 2 nvme drive(s)) 11:23:51 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:15:29.944 11:23:51 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69054 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69054 ']' 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69054 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69054 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69054' 00:15:29.944 killing process with pid 69054 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69054 00:15:29.944 11:23:51 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69054 00:15:31.844 11:23:53 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:32.411 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:32.670 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:32.670 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:32.929 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:32.929 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:32.929 00:15:32.929 real 2m31.259s 00:15:32.929 user 1m50.933s 00:15:32.929 sys 0m19.975s 00:15:32.929 
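The 45.01s and 45.04s figures come from timing_cmd in common/autotest_common.sh, which wraps remove_attach_helper so the suite can report how long a full set of hotplug events took. bash's time keyword reports on stderr, so capturing only the elapsed seconds takes some fd juggling; a minimal sketch of the idea (the real helper also handles TTY detection and xtrace state, visible as the [[ -t 0 ]] and exec steps in the trace):

    timing_cmd() {
        local elapsed TIMEFORMAT=%2R
        exec 3>&1 4>&2                             # save the real stdout/stderr
        elapsed=$({ time "$@" 1>&3 2>&4; } 2>&1)   # capture just time's report
        exec 3>&- 4>&-
        echo "$elapsed"
    }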
************************************ 00:15:32.929 END TEST sw_hotplug 00:15:32.929 ************************************ 00:15:32.929 11:23:54 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.929 11:23:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:32.929 11:23:55 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:15:32.929 11:23:55 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:32.929 11:23:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:32.929 11:23:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.929 11:23:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.929 ************************************ 00:15:32.929 START TEST nvme_xnvme 00:15:32.929 ************************************ 00:15:32.929 11:23:55 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:33.189 * Looking for test storage... 00:15:33.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.189 11:23:55 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:33.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.189 --rc genhtml_branch_coverage=1 00:15:33.189 --rc genhtml_function_coverage=1 00:15:33.189 --rc genhtml_legend=1 00:15:33.189 --rc geninfo_all_blocks=1 00:15:33.189 --rc geninfo_unexecuted_blocks=1 00:15:33.189 00:15:33.189 ' 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:33.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.189 --rc genhtml_branch_coverage=1 00:15:33.189 --rc genhtml_function_coverage=1 00:15:33.189 --rc genhtml_legend=1 00:15:33.189 --rc geninfo_all_blocks=1 00:15:33.189 --rc geninfo_unexecuted_blocks=1 00:15:33.189 00:15:33.189 ' 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:33.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.189 --rc genhtml_branch_coverage=1 00:15:33.189 --rc genhtml_function_coverage=1 00:15:33.189 --rc genhtml_legend=1 00:15:33.189 --rc geninfo_all_blocks=1 00:15:33.189 --rc geninfo_unexecuted_blocks=1 00:15:33.189 00:15:33.189 ' 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:33.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.189 --rc genhtml_branch_coverage=1 00:15:33.189 --rc genhtml_function_coverage=1 00:15:33.189 --rc genhtml_legend=1 00:15:33.189 --rc geninfo_all_blocks=1 00:15:33.189 --rc geninfo_unexecuted_blocks=1 00:15:33.189 00:15:33.189 ' 00:15:33.189 11:23:55 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:15:33.189 11:23:55 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:15:33.189 11:23:55 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:33.189 11:23:55 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:33.189 11:23:55 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:33.190 11:23:55 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:33.190 11:23:55 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:33.190 11:23:55 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:33.190 #define SPDK_CONFIG_H 00:15:33.190 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:33.190 #define SPDK_CONFIG_APPS 1 00:15:33.190 #define SPDK_CONFIG_ARCH native 00:15:33.190 #define SPDK_CONFIG_ASAN 1 00:15:33.190 #undef SPDK_CONFIG_AVAHI 00:15:33.190 #undef SPDK_CONFIG_CET 00:15:33.190 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:33.190 #define SPDK_CONFIG_COVERAGE 1 00:15:33.190 #define SPDK_CONFIG_CROSS_PREFIX 00:15:33.190 #undef SPDK_CONFIG_CRYPTO 00:15:33.190 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:33.190 #undef SPDK_CONFIG_CUSTOMOCF 00:15:33.190 #undef SPDK_CONFIG_DAOS 00:15:33.190 #define SPDK_CONFIG_DAOS_DIR 00:15:33.190 #define SPDK_CONFIG_DEBUG 1 00:15:33.190 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:33.190 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:33.190 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:33.190 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:33.190 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:33.190 #undef SPDK_CONFIG_DPDK_UADK 00:15:33.190 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:33.190 #define SPDK_CONFIG_EXAMPLES 1 00:15:33.190 #undef SPDK_CONFIG_FC 00:15:33.190 #define SPDK_CONFIG_FC_PATH 00:15:33.190 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:33.190 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:33.190 #define SPDK_CONFIG_FSDEV 1 00:15:33.190 #undef SPDK_CONFIG_FUSE 00:15:33.190 #undef SPDK_CONFIG_FUZZER 00:15:33.190 #define SPDK_CONFIG_FUZZER_LIB 00:15:33.190 #undef SPDK_CONFIG_GOLANG 00:15:33.190 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:33.190 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:33.190 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:33.190 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:33.190 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:33.190 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:33.190 #undef SPDK_CONFIG_HAVE_LZ4 00:15:33.190 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:33.190 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:33.190 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:33.190 #define SPDK_CONFIG_IDXD 1 00:15:33.190 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:33.190 #undef SPDK_CONFIG_IPSEC_MB 00:15:33.190 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:33.190 #define SPDK_CONFIG_ISAL 1 00:15:33.190 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:33.190 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:33.190 #define SPDK_CONFIG_LIBDIR 00:15:33.190 #undef SPDK_CONFIG_LTO 00:15:33.190 #define SPDK_CONFIG_MAX_LCORES 128 00:15:33.190 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:33.190 #define SPDK_CONFIG_NVME_CUSE 1 00:15:33.190 #undef SPDK_CONFIG_OCF 00:15:33.190 #define SPDK_CONFIG_OCF_PATH 00:15:33.190 #define SPDK_CONFIG_OPENSSL_PATH 00:15:33.190 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:33.190 #define SPDK_CONFIG_PGO_DIR 00:15:33.190 #undef SPDK_CONFIG_PGO_USE 00:15:33.190 #define SPDK_CONFIG_PREFIX /usr/local 00:15:33.190 #undef SPDK_CONFIG_RAID5F 00:15:33.190 #undef SPDK_CONFIG_RBD 00:15:33.190 #define SPDK_CONFIG_RDMA 1 00:15:33.190 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:33.190 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:33.190 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:33.190 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:33.190 #define SPDK_CONFIG_SHARED 1 00:15:33.190 #undef SPDK_CONFIG_SMA 00:15:33.190 #define SPDK_CONFIG_TESTS 1 00:15:33.190 #undef SPDK_CONFIG_TSAN 00:15:33.190 #define SPDK_CONFIG_UBLK 1 00:15:33.190 #define SPDK_CONFIG_UBSAN 1 00:15:33.190 #undef SPDK_CONFIG_UNIT_TESTS 00:15:33.190 #undef SPDK_CONFIG_URING 00:15:33.190 #define SPDK_CONFIG_URING_PATH 00:15:33.190 #undef SPDK_CONFIG_URING_ZNS 00:15:33.190 #undef SPDK_CONFIG_USDT 00:15:33.190 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:33.190 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:33.190 #undef SPDK_CONFIG_VFIO_USER 00:15:33.190 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:33.190 #define SPDK_CONFIG_VHOST 1 00:15:33.190 #define SPDK_CONFIG_VIRTIO 1 00:15:33.190 #undef SPDK_CONFIG_VTUNE 00:15:33.190 #define SPDK_CONFIG_VTUNE_DIR 00:15:33.190 #define SPDK_CONFIG_WERROR 1 00:15:33.190 #define SPDK_CONFIG_WPDK_DIR 00:15:33.190 #define SPDK_CONFIG_XNVME 1 00:15:33.190 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:33.190 11:23:55 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:33.190 11:23:55 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.190 11:23:55 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.190 11:23:55 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.190 11:23:55 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.190 11:23:55 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.190 11:23:55 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.190 11:23:55 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.190 11:23:55 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.190 11:23:55 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:33.190 11:23:55 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.190 11:23:55 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:33.190 11:23:55 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:33.190 11:23:55 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:33.190 11:23:55 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:33.190 11:23:55 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:15:33.190 11:23:55 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:15:33.190 11:23:55 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@68 -- # uname -s 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:33.191 
11:23:55 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:15:33.191 11:23:55 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:33.191 11:23:55 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:33.191 11:23:55 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:33.192 11:23:55 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
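The sanitizer plumbing traced above regenerates a leak-suppression file and exports ASAN/UBSAN/LSAN options before any test binary runs. A condensed sketch of that sequence, with the option strings copied from the trace (the real autotest_common.sh builds the suppression file slightly differently):

# Rebuild the leak-suppression file; a known benign leak in libfuse3 is
# silenced rather than failing the whole run.
suppressions=/var/tmp/asan_suppression_file
rm -f "$suppressions"
echo 'leak:libfuse3.so' > "$suppressions"

export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
export LSAN_OPTIONS="suppressions=$suppressions"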
00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70392 ]] 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70392 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.jaaEI7 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.jaaEI7/tests/xnvme /tmp/spdk.jaaEI7 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:33.192 11:23:55 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975945216 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591810048 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975945216 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591810048 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:15:33.192 11:23:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:33.192 11:23:55 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:15:33.193 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:15:33.193 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:15:33.193 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:15:33.193 11:23:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:15:33.193 11:23:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95627763712 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4075016192 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:33.451 * Looking for test storage... 
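The set_test_storage loop above parses `df -T` into per-mount tables and then picks the first candidate directory whose filesystem has enough free space (here /home/vagrant/spdk_repo/spdk/test/nvme/xnvme on btrfs). A condensed sketch of that logic — field order follows the read loop in the trace, but this is an illustration, not the verbatim autotest_common.sh code:

# Parse `df -T` (byte units, so avail matches the byte counts in the trace)
# into associative arrays keyed by mount point.
declare -A fss avails
while read -r source fs size use avail _ mount; do
    fss["$mount"]=$fs
    avails["$mount"]=$avail
done < <(df -T --block-size=1 | grep -v Filesystem)

requested_size=2214592512   # 2 GiB request plus margin, as computed in the trace
for target_dir in "$@"; do
    # Resolve the mount point backing this candidate directory.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( ${avails[$mount]:-0} >= requested_size )); then
        echo "using $target_dir on $mount (${fss[$mount]})"
        break
    fi
done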
00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975945216 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:33.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:33.451 11:23:55 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:15:33.452 11:23:55 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:33.452 11:23:55 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:15:33.452 11:23:55 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:33.452 11:23:55 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:33.452 11:23:55 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:33.452 11:23:55 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:33.452 11:23:55 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.452 11:23:55 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:33.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.452 --rc genhtml_branch_coverage=1 00:15:33.452 --rc genhtml_function_coverage=1 00:15:33.452 --rc genhtml_legend=1 00:15:33.452 --rc geninfo_all_blocks=1 00:15:33.452 --rc geninfo_unexecuted_blocks=1 00:15:33.452 00:15:33.452 ' 00:15:33.452 11:23:55 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:33.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.452 --rc genhtml_branch_coverage=1 00:15:33.452 --rc genhtml_function_coverage=1 00:15:33.452 --rc genhtml_legend=1 00:15:33.452 --rc geninfo_all_blocks=1 
00:15:33.452 --rc geninfo_unexecuted_blocks=1 00:15:33.452 00:15:33.452 ' 00:15:33.452 11:23:55 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:33.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.452 --rc genhtml_branch_coverage=1 00:15:33.452 --rc genhtml_function_coverage=1 00:15:33.452 --rc genhtml_legend=1 00:15:33.452 --rc geninfo_all_blocks=1 00:15:33.452 --rc geninfo_unexecuted_blocks=1 00:15:33.452 00:15:33.452 ' 00:15:33.452 11:23:55 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:33.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.452 --rc genhtml_branch_coverage=1 00:15:33.452 --rc genhtml_function_coverage=1 00:15:33.452 --rc genhtml_legend=1 00:15:33.452 --rc geninfo_all_blocks=1 00:15:33.452 --rc geninfo_unexecuted_blocks=1 00:15:33.452 00:15:33.452 ' 00:15:33.452 11:23:55 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.452 11:23:55 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.452 11:23:55 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.452 11:23:55 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.452 11:23:55 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.452 11:23:55 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:33.452 11:23:55 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.452 11:23:55 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:15:33.452 11:23:55 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:33.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:33.969 Waiting for block devices as requested 00:15:33.969 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:33.969 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:34.227 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:34.227 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:39.526 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:39.526 11:24:01 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:15:39.784 11:24:01 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:15:39.784 11:24:01 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:15:39.784 11:24:01 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:15:39.784 11:24:01 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:15:39.784 11:24:01 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:15:39.784 11:24:01 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:15:39.784 11:24:01 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:39.784 No valid GPT data, bailing 00:15:40.042 11:24:01 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:40.042 11:24:01 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:15:40.042 11:24:01 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:40.042 11:24:01 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:40.042 11:24:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:40.042 11:24:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.042 11:24:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.042 ************************************ 00:15:40.042 START TEST xnvme_rpc 00:15:40.042 ************************************ 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70779 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70779 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70779 ']' 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.042 11:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.042 [2024-12-10 11:24:02.103530] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
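Stripped of the harness scaffolding, the xnvme_rpc test traced here starts spdk_tgt, attaches /dev/nvme0n1 as an xnvme bdev over the RPC socket, and verifies the saved parameters with jq. A hand-run sketch of the same flow, assuming the default RPC socket at /var/tmp/spdk.sock — the bdev_xnvme_create/bdev_xnvme_delete methods and the jq filter appear verbatim in the trace, while the sleep is a crude stand-in for the harness's waitforlisten:

spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/bin/spdk_tgt" &        # start the SPDK target
tgt_pid=$!
trap 'kill "$tgt_pid"' EXIT

sleep 1                             # stand-in for waitforlisten

# Attach /dev/nvme0n1 through xnvme with the libaio I/O mechanism.
"$spdk/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio

# Verify the bdev parameters the same way the test does: query the saved
# config and filter it with jq.
"$spdk/scripts/rpc.py" framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'

"$spdk/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev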
00:15:40.042 [2024-12-10 11:24:02.103724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70779 ] 00:15:40.301 [2024-12-10 11:24:02.288918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.301 [2024-12-10 11:24:02.416016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.236 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.237 xnvme_bdev 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.237 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:41.495 11:24:03 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70779 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70779 ']' 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70779 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70779 00:15:41.495 killing process with pid 70779 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70779' 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70779 00:15:41.495 11:24:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70779 00:15:44.025 ************************************ 00:15:44.025 END TEST xnvme_rpc 00:15:44.025 ************************************ 00:15:44.025 00:15:44.025 real 0m3.681s 00:15:44.025 user 0m4.000s 00:15:44.025 sys 0m0.470s 00:15:44.025 11:24:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.025 11:24:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.025 11:24:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:44.025 11:24:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:44.025 11:24:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.025 11:24:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:44.025 ************************************ 00:15:44.025 START TEST xnvme_bdevperf 00:15:44.025 ************************************ 00:15:44.025 11:24:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:44.025 11:24:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:44.025 11:24:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:15:44.025 11:24:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:44.025 11:24:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:44.025 11:24:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:15:44.025 11:24:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:44.025 11:24:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:44.025 { 00:15:44.025 "subsystems": [ 00:15:44.025 { 00:15:44.025 "subsystem": "bdev", 00:15:44.025 "config": [ 00:15:44.025 { 00:15:44.025 "params": { 00:15:44.025 "io_mechanism": "libaio", 00:15:44.025 "conserve_cpu": false, 00:15:44.025 "filename": "/dev/nvme0n1", 00:15:44.025 "name": "xnvme_bdev" 00:15:44.025 }, 00:15:44.025 "method": "bdev_xnvme_create" 00:15:44.025 }, 00:15:44.025 { 00:15:44.025 "method": "bdev_wait_for_examine" 00:15:44.025 } 00:15:44.025 ] 00:15:44.025 } 00:15:44.025 ] 00:15:44.025 } 00:15:44.025 [2024-12-10 11:24:05.824796] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:15:44.026 [2024-12-10 11:24:05.824941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70859 ] 00:15:44.026 [2024-12-10 11:24:05.996371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.026 [2024-12-10 11:24:06.158682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.594 Running I/O for 5 seconds... 00:15:46.501 26063.00 IOPS, 101.81 MiB/s [2024-12-10T11:24:09.602Z] 25850.00 IOPS, 100.98 MiB/s [2024-12-10T11:24:10.536Z] 26002.67 IOPS, 101.57 MiB/s [2024-12-10T11:24:11.912Z] 25350.00 IOPS, 99.02 MiB/s 00:15:49.745 Latency(us) 00:15:49.745 [2024-12-10T11:24:11.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.745 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:49.745 xnvme_bdev : 5.00 24829.75 96.99 0.00 0.00 2571.03 213.18 5600.35 00:15:49.745 [2024-12-10T11:24:11.912Z] =================================================================================================================== 00:15:49.745 [2024-12-10T11:24:11.912Z] Total : 24829.75 96.99 0.00 0.00 2571.03 213.18 5600.35 00:15:50.681 11:24:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:50.681 11:24:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:50.681 11:24:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:50.681 11:24:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:50.681 11:24:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:50.681 { 00:15:50.681 "subsystems": [ 00:15:50.681 { 00:15:50.681 "subsystem": "bdev", 00:15:50.681 "config": [ 00:15:50.681 { 00:15:50.681 "params": { 00:15:50.681 "io_mechanism": "libaio", 00:15:50.681 "conserve_cpu": false, 00:15:50.681 "filename": "/dev/nvme0n1", 00:15:50.681 "name": "xnvme_bdev" 00:15:50.681 }, 00:15:50.681 "method": "bdev_xnvme_create" 00:15:50.681 }, 00:15:50.681 { 00:15:50.681 "method": "bdev_wait_for_examine" 00:15:50.681 } 00:15:50.681 ] 00:15:50.681 } 00:15:50.681 ] 00:15:50.681 } 00:15:50.681 [2024-12-10 11:24:12.602340] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
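
The bdevperf pass above receives its bdev layout as JSON on /dev/fd/62; the same run can be reproduced standalone by writing that config to a scratch file first. A sketch — the /tmp path is illustrative, while the JSON structure and the flags are taken verbatim from the invocation in the trace:

    echo '{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio","conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},{"method":"bdev_wait_for_examine"}]}]}' > /tmp/xnvme.json
    # 64-deep 4 KiB random reads for 5 seconds against the xnvme bdev.
    build/examples/bdevperf --json /tmp/xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The randwrite pass that follows is the identical command with -w randwrite.
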
00:15:50.681 [2024-12-10 11:24:12.602531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70940 ] 00:15:50.681 [2024-12-10 11:24:12.785292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.939 [2024-12-10 11:24:12.913799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.197 Running I/O for 5 seconds... 00:15:53.509 23277.00 IOPS, 90.93 MiB/s [2024-12-10T11:24:16.612Z] 23183.00 IOPS, 90.56 MiB/s [2024-12-10T11:24:17.547Z] 23250.00 IOPS, 90.82 MiB/s [2024-12-10T11:24:18.509Z] 23390.50 IOPS, 91.37 MiB/s [2024-12-10T11:24:18.509Z] 23206.40 IOPS, 90.65 MiB/s 00:15:56.342 Latency(us) 00:15:56.342 [2024-12-10T11:24:18.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.342 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:56.342 xnvme_bdev : 5.01 23182.62 90.56 0.00 0.00 2752.12 404.01 6642.97 00:15:56.342 [2024-12-10T11:24:18.509Z] =================================================================================================================== 00:15:56.342 [2024-12-10T11:24:18.509Z] Total : 23182.62 90.56 0.00 0.00 2752.12 404.01 6642.97 00:15:57.277 00:15:57.277 real 0m13.586s 00:15:57.277 user 0m5.265s 00:15:57.277 sys 0m5.837s 00:15:57.277 11:24:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.277 ************************************ 00:15:57.277 END TEST xnvme_bdevperf 00:15:57.277 ************************************ 00:15:57.277 11:24:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:57.277 11:24:19 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:57.277 11:24:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:57.277 11:24:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.277 11:24:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.277 ************************************ 00:15:57.277 START TEST xnvme_fio_plugin 00:15:57.277 ************************************ 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:57.277 11:24:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:57.277 { 00:15:57.277 "subsystems": [ 00:15:57.277 { 00:15:57.277 "subsystem": "bdev", 00:15:57.277 "config": [ 00:15:57.277 { 00:15:57.277 "params": { 00:15:57.277 "io_mechanism": "libaio", 00:15:57.277 "conserve_cpu": false, 00:15:57.277 "filename": "/dev/nvme0n1", 00:15:57.277 "name": "xnvme_bdev" 00:15:57.277 }, 00:15:57.277 "method": "bdev_xnvme_create" 00:15:57.277 }, 00:15:57.277 { 00:15:57.277 "method": "bdev_wait_for_examine" 00:15:57.277 } 00:15:57.277 ] 00:15:57.277 } 00:15:57.277 ] 00:15:57.277 } 00:15:57.535 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:57.535 fio-3.35 00:15:57.535 Starting 1 thread 00:16:04.093 00:16:04.093 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71063: Tue Dec 10 11:24:25 2024 00:16:04.093 read: IOPS=26.3k, BW=103MiB/s (108MB/s)(514MiB/5001msec) 00:16:04.093 slat (usec): min=5, max=1246, avg=33.83, stdev=29.86 00:16:04.093 clat (usec): min=114, max=5560, avg=1352.82, stdev=766.21 00:16:04.093 lat (usec): min=161, max=5675, avg=1386.65, stdev=769.80 00:16:04.093 clat percentiles (usec): 00:16:04.093 | 1.00th=[ 237], 5.00th=[ 351], 10.00th=[ 461], 20.00th=[ 668], 00:16:04.093 | 30.00th=[ 857], 40.00th=[ 1037], 50.00th=[ 1221], 60.00th=[ 1434], 00:16:04.093 | 70.00th=[ 1680], 80.00th=[ 1975], 90.00th=[ 2409], 95.00th=[ 2769], 00:16:04.093 | 99.00th=[ 3621], 99.50th=[ 3916], 99.90th=[ 4424], 99.95th=[ 4621], 00:16:04.093 | 99.99th=[ 5014] 00:16:04.093 bw ( KiB/s): min=88824, max=132568, 
per=100.00%, avg=105193.22, stdev=13280.19, samples=9 00:16:04.093 iops : min=22206, max=33142, avg=26298.22, stdev=3320.05, samples=9 00:16:04.093 lat (usec) : 250=1.34%, 500=10.43%, 750=12.62%, 1000=13.74% 00:16:04.093 lat (msec) : 2=42.68%, 4=18.77%, 10=0.42% 00:16:04.093 cpu : usr=24.80%, sys=53.62%, ctx=128, majf=0, minf=764 00:16:04.093 IO depths : 1=0.1%, 2=1.5%, 4=5.0%, 8=11.8%, 16=25.8%, 32=54.0%, >=64=1.7% 00:16:04.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.093 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:16:04.093 issued rwts: total=131466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.093 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:04.093 00:16:04.093 Run status group 0 (all jobs): 00:16:04.093 READ: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=514MiB (538MB), run=5001-5001msec 00:16:04.659 ----------------------------------------------------- 00:16:04.659 Suppressions used: 00:16:04.659 count bytes template 00:16:04.659 1 11 /usr/src/fio/parse.c 00:16:04.659 1 8 libtcmalloc_minimal.so 00:16:04.659 1 904 libcrypto.so 00:16:04.659 ----------------------------------------------------- 00:16:04.659 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:04.659 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:04.918 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:04.918 11:24:26 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:04.918 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:04.918 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:04.918 11:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:04.918 { 00:16:04.918 "subsystems": [ 00:16:04.918 { 00:16:04.918 "subsystem": "bdev", 00:16:04.918 "config": [ 00:16:04.918 { 00:16:04.918 "params": { 00:16:04.918 "io_mechanism": "libaio", 00:16:04.918 "conserve_cpu": false, 00:16:04.918 "filename": "/dev/nvme0n1", 00:16:04.918 "name": "xnvme_bdev" 00:16:04.918 }, 00:16:04.918 "method": "bdev_xnvme_create" 00:16:04.918 }, 00:16:04.918 { 00:16:04.918 "method": "bdev_wait_for_examine" 00:16:04.918 } 00:16:04.918 ] 00:16:04.918 } 00:16:04.918 ] 00:16:04.918 } 00:16:04.918 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:04.918 fio-3.35 00:16:04.918 Starting 1 thread 00:16:11.476 00:16:11.476 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71155: Tue Dec 10 11:24:32 2024 00:16:11.476 write: IOPS=23.8k, BW=92.8MiB/s (97.3MB/s)(464MiB/5001msec); 0 zone resets 00:16:11.476 slat (usec): min=5, max=2053, avg=37.57, stdev=30.47 00:16:11.476 clat (usec): min=119, max=5675, avg=1486.35, stdev=825.16 00:16:11.476 lat (usec): min=189, max=5747, avg=1523.92, stdev=828.09 00:16:11.476 clat percentiles (usec): 00:16:11.476 | 1.00th=[ 251], 5.00th=[ 371], 10.00th=[ 494], 20.00th=[ 725], 00:16:11.476 | 30.00th=[ 938], 40.00th=[ 1139], 50.00th=[ 1352], 60.00th=[ 1598], 00:16:11.476 | 70.00th=[ 1876], 80.00th=[ 2212], 90.00th=[ 2606], 95.00th=[ 2933], 00:16:11.476 | 99.00th=[ 3818], 99.50th=[ 4113], 99.90th=[ 4621], 99.95th=[ 4883], 00:16:11.476 | 99.99th=[ 5211] 00:16:11.476 bw ( KiB/s): min=85312, max=112224, per=100.00%, avg=96242.67, stdev=9412.61, samples=9 00:16:11.476 iops : min=21328, max=28056, avg=24060.67, stdev=2353.15, samples=9 00:16:11.476 lat (usec) : 250=0.97%, 500=9.29%, 750=10.97%, 1000=11.88% 00:16:11.476 lat (msec) : 2=40.63%, 4=25.58%, 10=0.68% 00:16:11.476 cpu : usr=25.50%, sys=51.98%, ctx=45, majf=0, minf=765 00:16:11.476 IO depths : 1=0.1%, 2=1.6%, 4=5.3%, 8=12.0%, 16=25.7%, 32=53.6%, >=64=1.7% 00:16:11.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.476 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:16:11.476 issued rwts: total=0,118809,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:11.476 00:16:11.476 Run status group 0 (all jobs): 00:16:11.476 WRITE: bw=92.8MiB/s (97.3MB/s), 92.8MiB/s-92.8MiB/s (97.3MB/s-97.3MB/s), io=464MiB (487MB), run=5001-5001msec 00:16:12.069 ----------------------------------------------------- 00:16:12.069 Suppressions used: 00:16:12.069 count bytes template 00:16:12.069 1 11 /usr/src/fio/parse.c 00:16:12.069 1 8 libtcmalloc_minimal.so 00:16:12.069 1 904 libcrypto.so 00:16:12.069 ----------------------------------------------------- 00:16:12.069 00:16:12.069 00:16:12.069 real 0m14.804s 00:16:12.069 user 0m6.368s 00:16:12.069 sys 0m5.894s 00:16:12.069 
************************************ 00:16:12.069 END TEST xnvme_fio_plugin 00:16:12.069 ************************************ 00:16:12.069 11:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.069 11:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:12.069 11:24:34 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:12.069 11:24:34 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:12.069 11:24:34 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:12.069 11:24:34 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:12.069 11:24:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:12.069 11:24:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.069 11:24:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.069 ************************************ 00:16:12.069 START TEST xnvme_rpc 00:16:12.069 ************************************ 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:12.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71247 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71247 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71247 ']' 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.069 11:24:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.329 [2024-12-10 11:24:34.364685] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
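
From this point the whole sequence repeats with CPU conservation enabled: the RPC pass appends -c (the cc["true"]=-c mapping set up in the trace) and the bdevperf/fio configs carry "conserve_cpu": true instead of false. A sketch of the deltas only, reusing the scratch file from the earlier sketch (paths remain illustrative):

    # RPC variant: one extra flag flips conserve_cpu on.
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    # bdevperf/fio variant: flip the same parameter inside the JSON config.
    jq '.subsystems[0].config[0].params.conserve_cpu = true' /tmp/xnvme.json > /tmp/xnvme_cc.json
    build/examples/bdevperf --json /tmp/xnvme_cc.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
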
00:16:12.329 [2024-12-10 11:24:34.365193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71247 ] 00:16:12.587 [2024-12-10 11:24:34.551306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.587 [2024-12-10 11:24:34.675416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.521 xnvme_bdev 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71247 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71247 ']' 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71247 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.521 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71247 00:16:13.780 killing process with pid 71247 00:16:13.780 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.780 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.780 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71247' 00:16:13.780 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71247 00:16:13.780 11:24:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71247 00:16:15.681 ************************************ 00:16:15.681 END TEST xnvme_rpc 00:16:15.681 ************************************ 00:16:15.681 00:16:15.681 real 0m3.568s 00:16:15.681 user 0m3.864s 00:16:15.681 sys 0m0.454s 00:16:15.681 11:24:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.681 11:24:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.681 11:24:37 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:15.681 11:24:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:15.681 11:24:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.681 11:24:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:15.681 ************************************ 00:16:15.681 START TEST xnvme_bdevperf 00:16:15.681 ************************************ 00:16:15.681 11:24:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:15.681 11:24:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:15.681 11:24:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:16:15.681 11:24:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:15.681 11:24:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:15.681 11:24:37 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:15.681 11:24:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:15.681 11:24:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:15.940 { 00:16:15.940 "subsystems": [ 00:16:15.940 { 00:16:15.940 "subsystem": "bdev", 00:16:15.940 "config": [ 00:16:15.940 { 00:16:15.940 "params": { 00:16:15.940 "io_mechanism": "libaio", 00:16:15.940 "conserve_cpu": true, 00:16:15.940 "filename": "/dev/nvme0n1", 00:16:15.940 "name": "xnvme_bdev" 00:16:15.940 }, 00:16:15.940 "method": "bdev_xnvme_create" 00:16:15.940 }, 00:16:15.940 { 00:16:15.940 "method": "bdev_wait_for_examine" 00:16:15.940 } 00:16:15.940 ] 00:16:15.940 } 00:16:15.940 ] 00:16:15.940 } 00:16:15.940 [2024-12-10 11:24:37.928719] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:16:15.940 [2024-12-10 11:24:37.928891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71325 ] 00:16:16.198 [2024-12-10 11:24:38.115916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.198 [2024-12-10 11:24:38.218196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.457 Running I/O for 5 seconds... 00:16:18.771 25109.00 IOPS, 98.08 MiB/s [2024-12-10T11:24:41.554Z] 26495.00 IOPS, 103.50 MiB/s [2024-12-10T11:24:42.929Z] 25367.00 IOPS, 99.09 MiB/s [2024-12-10T11:24:43.863Z] 24937.75 IOPS, 97.41 MiB/s 00:16:21.696 Latency(us) 00:16:21.696 [2024-12-10T11:24:43.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.696 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:21.696 xnvme_bdev : 5.00 25235.91 98.58 0.00 0.00 2529.74 211.32 5749.29 00:16:21.696 [2024-12-10T11:24:43.863Z] =================================================================================================================== 00:16:21.696 [2024-12-10T11:24:43.863Z] Total : 25235.91 98.58 0.00 0.00 2529.74 211.32 5749.29 00:16:22.629 11:24:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:22.629 11:24:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:22.629 11:24:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:22.629 11:24:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:22.629 11:24:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:22.629 { 00:16:22.629 "subsystems": [ 00:16:22.629 { 00:16:22.629 "subsystem": "bdev", 00:16:22.629 "config": [ 00:16:22.629 { 00:16:22.629 "params": { 00:16:22.629 "io_mechanism": "libaio", 00:16:22.629 "conserve_cpu": true, 00:16:22.629 "filename": "/dev/nvme0n1", 00:16:22.629 "name": "xnvme_bdev" 00:16:22.629 }, 00:16:22.629 "method": "bdev_xnvme_create" 00:16:22.629 }, 00:16:22.629 { 00:16:22.629 "method": "bdev_wait_for_examine" 00:16:22.629 } 00:16:22.629 ] 00:16:22.629 } 00:16:22.629 ] 00:16:22.629 } 00:16:22.629 [2024-12-10 11:24:44.652854] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
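
The xnvme_fio_plugin passes (one completed above, another follows below) drive the same bdev through fio's external spdk_bdev ioengine rather than bdevperf. Stripped of the harness plumbing, the invocation reduces to the sketch below; the LD_PRELOAD of the ASan runtime ahead of the plugin is only needed on this sanitizer build, and the scratch JSON file stands in for the /dev/fd/62 the harness actually feeds it:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
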
00:16:22.629 [2024-12-10 11:24:44.653254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71406 ] 00:16:22.887 [2024-12-10 11:24:44.831610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.887 [2024-12-10 11:24:44.934133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.144 Running I/O for 5 seconds... 00:16:25.460 21625.00 IOPS, 84.47 MiB/s [2024-12-10T11:24:48.561Z] 22719.00 IOPS, 88.75 MiB/s [2024-12-10T11:24:49.495Z] 23209.00 IOPS, 90.66 MiB/s [2024-12-10T11:24:50.430Z] 23181.50 IOPS, 90.55 MiB/s [2024-12-10T11:24:50.430Z] 22937.00 IOPS, 89.60 MiB/s 00:16:28.263 Latency(us) 00:16:28.263 [2024-12-10T11:24:50.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.263 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:28.263 xnvme_bdev : 5.01 22923.27 89.54 0.00 0.00 2784.13 655.36 5838.66 00:16:28.263 [2024-12-10T11:24:50.430Z] =================================================================================================================== 00:16:28.263 [2024-12-10T11:24:50.430Z] Total : 22923.27 89.54 0.00 0.00 2784.13 655.36 5838.66 00:16:29.197 00:16:29.197 real 0m13.501s 00:16:29.197 user 0m5.131s 00:16:29.197 sys 0m5.810s 00:16:29.197 11:24:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.197 11:24:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:29.197 ************************************ 00:16:29.197 END TEST xnvme_bdevperf 00:16:29.197 ************************************ 00:16:29.456 11:24:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:29.456 11:24:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:29.456 11:24:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.456 11:24:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:29.456 ************************************ 00:16:29.456 START TEST xnvme_fio_plugin 00:16:29.456 ************************************ 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:29.456 11:24:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:29.456 { 00:16:29.456 "subsystems": [ 00:16:29.456 { 00:16:29.456 "subsystem": "bdev", 00:16:29.456 "config": [ 00:16:29.456 { 00:16:29.456 "params": { 00:16:29.456 "io_mechanism": "libaio", 00:16:29.456 "conserve_cpu": true, 00:16:29.456 "filename": "/dev/nvme0n1", 00:16:29.456 "name": "xnvme_bdev" 00:16:29.456 }, 00:16:29.456 "method": "bdev_xnvme_create" 00:16:29.456 }, 00:16:29.456 { 00:16:29.456 "method": "bdev_wait_for_examine" 00:16:29.456 } 00:16:29.456 ] 00:16:29.456 } 00:16:29.456 ] 00:16:29.456 } 00:16:29.456 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:29.456 fio-3.35 00:16:29.456 Starting 1 thread 00:16:36.018 00:16:36.018 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71525: Tue Dec 10 11:24:57 2024 00:16:36.018 read: IOPS=23.7k, BW=92.6MiB/s (97.1MB/s)(463MiB/5001msec) 00:16:36.018 slat (usec): min=5, max=657, avg=37.71, stdev=29.86 00:16:36.018 clat (usec): min=73, max=6075, avg=1486.12, stdev=835.86 00:16:36.018 lat (usec): min=150, max=6139, avg=1523.83, stdev=839.15 00:16:36.018 clat percentiles (usec): 00:16:36.018 | 1.00th=[ 247], 5.00th=[ 363], 10.00th=[ 486], 20.00th=[ 717], 00:16:36.018 | 30.00th=[ 922], 40.00th=[ 1123], 50.00th=[ 1352], 60.00th=[ 1598], 00:16:36.018 | 70.00th=[ 1893], 80.00th=[ 2245], 90.00th=[ 2638], 95.00th=[ 2966], 00:16:36.018 | 99.00th=[ 3818], 99.50th=[ 4113], 99.90th=[ 4621], 99.95th=[ 4817], 00:16:36.018 | 99.99th=[ 5145] 00:16:36.018 bw ( KiB/s): min=82808, max=114040, 
per=100.00%, avg=95111.11, stdev=10824.45, samples=9 00:16:36.018 iops : min=20702, max=28510, avg=23777.78, stdev=2706.11, samples=9 00:16:36.018 lat (usec) : 100=0.01%, 250=1.10%, 500=9.53%, 750=10.97%, 1000=12.16% 00:16:36.018 lat (msec) : 2=39.54%, 4=26.03%, 10=0.67% 00:16:36.018 cpu : usr=24.92%, sys=51.98%, ctx=104, majf=0, minf=729 00:16:36.018 IO depths : 1=0.2%, 2=1.8%, 4=5.5%, 8=12.1%, 16=25.6%, 32=53.1%, >=64=1.7% 00:16:36.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.018 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:16:36.018 issued rwts: total=118513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.018 00:16:36.018 Run status group 0 (all jobs): 00:16:36.018 READ: bw=92.6MiB/s (97.1MB/s), 92.6MiB/s-92.6MiB/s (97.1MB/s-97.1MB/s), io=463MiB (485MB), run=5001-5001msec 00:16:36.585 ----------------------------------------------------- 00:16:36.585 Suppressions used: 00:16:36.585 count bytes template 00:16:36.585 1 11 /usr/src/fio/parse.c 00:16:36.585 1 8 libtcmalloc_minimal.so 00:16:36.585 1 904 libcrypto.so 00:16:36.585 ----------------------------------------------------- 00:16:36.585 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:36.585 
11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:36.585 11:24:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:36.585 { 00:16:36.585 "subsystems": [ 00:16:36.585 { 00:16:36.585 "subsystem": "bdev", 00:16:36.585 "config": [ 00:16:36.585 { 00:16:36.585 "params": { 00:16:36.585 "io_mechanism": "libaio", 00:16:36.585 "conserve_cpu": true, 00:16:36.585 "filename": "/dev/nvme0n1", 00:16:36.585 "name": "xnvme_bdev" 00:16:36.585 }, 00:16:36.585 "method": "bdev_xnvme_create" 00:16:36.585 }, 00:16:36.585 { 00:16:36.585 "method": "bdev_wait_for_examine" 00:16:36.585 } 00:16:36.585 ] 00:16:36.585 } 00:16:36.585 ] 00:16:36.585 } 00:16:36.843 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:36.843 fio-3.35 00:16:36.843 Starting 1 thread 00:16:43.406 00:16:43.406 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71623: Tue Dec 10 11:25:04 2024 00:16:43.406 write: IOPS=24.3k, BW=94.9MiB/s (99.5MB/s)(474MiB/5001msec); 0 zone resets 00:16:43.406 slat (usec): min=5, max=1257, avg=36.80, stdev=29.30 00:16:43.406 clat (usec): min=73, max=5937, avg=1451.33, stdev=790.63 00:16:43.406 lat (usec): min=99, max=6026, avg=1488.12, stdev=792.87 00:16:43.406 clat percentiles (usec): 00:16:43.406 | 1.00th=[ 243], 5.00th=[ 359], 10.00th=[ 482], 20.00th=[ 709], 00:16:43.406 | 30.00th=[ 922], 40.00th=[ 1123], 50.00th=[ 1336], 60.00th=[ 1582], 00:16:43.406 | 70.00th=[ 1860], 80.00th=[ 2180], 90.00th=[ 2540], 95.00th=[ 2802], 00:16:43.406 | 99.00th=[ 3556], 99.50th=[ 3884], 99.90th=[ 4490], 99.95th=[ 4686], 00:16:43.406 | 99.99th=[ 5211] 00:16:43.406 bw ( KiB/s): min=87608, max=103137, per=97.93%, avg=95125.56, stdev=6821.73, samples=9 00:16:43.406 iops : min=21902, max=25784, avg=23781.33, stdev=1705.41, samples=9 00:16:43.406 lat (usec) : 100=0.01%, 250=1.16%, 500=9.59%, 750=11.13%, 1000=11.98% 00:16:43.406 lat (msec) : 2=40.78%, 4=24.96%, 10=0.38% 00:16:43.406 cpu : usr=24.18%, sys=52.88%, ctx=160, majf=0, minf=765 00:16:43.406 IO depths : 1=0.2%, 2=1.7%, 4=5.4%, 8=12.1%, 16=25.7%, 32=53.2%, >=64=1.7% 00:16:43.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.406 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:16:43.406 issued rwts: total=0,121450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:43.406 00:16:43.406 Run status group 0 (all jobs): 00:16:43.406 WRITE: bw=94.9MiB/s (99.5MB/s), 94.9MiB/s-94.9MiB/s (99.5MB/s-99.5MB/s), io=474MiB (497MB), run=5001-5001msec 00:16:43.973 ----------------------------------------------------- 00:16:43.973 Suppressions used: 00:16:43.973 count bytes template 00:16:43.973 1 11 /usr/src/fio/parse.c 00:16:43.973 1 8 libtcmalloc_minimal.so 00:16:43.973 1 904 libcrypto.so 00:16:43.973 ----------------------------------------------------- 00:16:43.973 00:16:43.973 00:16:43.973 real 0m14.603s 00:16:43.973 user 0m6.115s 00:16:43.973 sys 0m5.846s 
00:16:43.973 11:25:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.973 ************************************ 00:16:43.973 END TEST xnvme_fio_plugin 00:16:43.973 ************************************ 00:16:43.973 11:25:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:43.973 11:25:06 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:43.973 11:25:06 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:43.973 11:25:06 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:43.973 11:25:06 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:43.973 11:25:06 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:43.973 11:25:06 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:43.973 11:25:06 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:43.973 11:25:06 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:43.973 11:25:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:43.973 11:25:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:43.974 11:25:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.974 11:25:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.974 ************************************ 00:16:43.974 START TEST xnvme_rpc 00:16:43.974 ************************************ 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71709 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71709 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71709 ']' 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.974 11:25:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.232 [2024-12-10 11:25:06.141188] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
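
Here the suite moves to its second io mechanism, io_uring, and reruns the same three tests; per the filename map set up at the top of this section, io_uring stays on the block device, while the later io_uring_cmd pass will target the char device /dev/ng0n1 instead. Against the earlier RPC sketch, the only change is the mechanism argument:

    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: io_uring
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev
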
00:16:44.232 [2024-12-10 11:25:06.141351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71709 ] 00:16:44.232 [2024-12-10 11:25:06.315287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.490 [2024-12-10 11:25:06.443509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.425 xnvme_bdev 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:45.425 11:25:07 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71709 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71709 ']' 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71709 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71709 00:16:45.425 killing process with pid 71709 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71709' 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71709 00:16:45.425 11:25:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71709 00:16:47.957 00:16:47.957 real 0m3.607s 00:16:47.957 user 0m3.901s 00:16:47.957 sys 0m0.443s 00:16:47.957 11:25:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.957 11:25:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.957 ************************************ 00:16:47.957 END TEST xnvme_rpc 00:16:47.957 ************************************ 00:16:47.957 11:25:09 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:47.957 11:25:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:47.957 11:25:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.957 11:25:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:47.957 ************************************ 00:16:47.957 START TEST xnvme_bdevperf 00:16:47.957 ************************************ 00:16:47.957 11:25:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:47.957 11:25:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:47.957 11:25:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:16:47.957 11:25:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:47.957 11:25:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:47.957 11:25:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:16:47.957 11:25:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:47.957 11:25:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:47.957 { 00:16:47.957 "subsystems": [ 00:16:47.957 { 00:16:47.957 "subsystem": "bdev", 00:16:47.957 "config": [ 00:16:47.957 { 00:16:47.957 "params": { 00:16:47.957 "io_mechanism": "io_uring", 00:16:47.957 "conserve_cpu": false, 00:16:47.957 "filename": "/dev/nvme0n1", 00:16:47.957 "name": "xnvme_bdev" 00:16:47.957 }, 00:16:47.957 "method": "bdev_xnvme_create" 00:16:47.957 }, 00:16:47.957 { 00:16:47.957 "method": "bdev_wait_for_examine" 00:16:47.957 } 00:16:47.957 ] 00:16:47.957 } 00:16:47.958 ] 00:16:47.958 } 00:16:47.958 [2024-12-10 11:25:09.776538] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:16:47.958 [2024-12-10 11:25:09.776719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71789 ] 00:16:47.958 [2024-12-10 11:25:09.956459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.958 [2024-12-10 11:25:10.084052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.524 Running I/O for 5 seconds... 00:16:50.393 47830.00 IOPS, 186.84 MiB/s [2024-12-10T11:25:13.494Z] 47499.00 IOPS, 185.54 MiB/s [2024-12-10T11:25:14.430Z] 47637.33 IOPS, 186.08 MiB/s [2024-12-10T11:25:15.827Z] 47824.00 IOPS, 186.81 MiB/s 00:16:53.660 Latency(us) 00:16:53.660 [2024-12-10T11:25:15.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.660 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:53.660 xnvme_bdev : 5.00 47705.01 186.35 0.00 0.00 1337.03 307.20 5928.03 00:16:53.660 [2024-12-10T11:25:15.827Z] =================================================================================================================== 00:16:53.660 [2024-12-10T11:25:15.827Z] Total : 47705.01 186.35 0.00 0.00 1337.03 307.20 5928.03 00:16:54.595 11:25:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:54.595 11:25:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:54.595 11:25:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:54.595 11:25:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:54.595 11:25:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:54.595 { 00:16:54.595 "subsystems": [ 00:16:54.595 { 00:16:54.595 "subsystem": "bdev", 00:16:54.595 "config": [ 00:16:54.595 { 00:16:54.595 "params": { 00:16:54.595 "io_mechanism": "io_uring", 00:16:54.595 "conserve_cpu": false, 00:16:54.595 "filename": "/dev/nvme0n1", 00:16:54.595 "name": "xnvme_bdev" 00:16:54.595 }, 00:16:54.595 "method": "bdev_xnvme_create" 00:16:54.595 }, 00:16:54.595 { 00:16:54.595 "method": "bdev_wait_for_examine" 00:16:54.595 } 00:16:54.595 ] 00:16:54.595 } 00:16:54.595 ] 00:16:54.595 } 00:16:54.595 [2024-12-10 11:25:16.537877] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
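[annotation] The JSON block dumped just above is the bdev configuration that gen_conf streams to bdevperf over /dev/fd/62. A minimal standalone sketch of the same randwrite pass, assuming the build tree shown in the log; xnvme.json is an illustrative file name, not one the harness uses:

# Save the config dumped above to a regular file (illustrative name) and hand it
# to bdevperf with the exact queue/workload flags the harness uses.
cat > xnvme.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "params": { "io_mechanism": "io_uring", "conserve_cpu": false,
                  "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
      "method": "bdev_xnvme_create" },
    { "method": "bdev_wait_for_examine" } ] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json xnvme.json \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096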
00:16:54.595 [2024-12-10 11:25:16.538024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71864 ] 00:16:54.595 [2024-12-10 11:25:16.737755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.853 [2024-12-10 11:25:16.861460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.111 Running I/O for 5 seconds... 00:16:57.420 47936.00 IOPS, 187.25 MiB/s [2024-12-10T11:25:20.522Z] 46752.00 IOPS, 182.62 MiB/s [2024-12-10T11:25:21.467Z] 45845.33 IOPS, 179.08 MiB/s [2024-12-10T11:25:22.402Z] 46464.00 IOPS, 181.50 MiB/s 00:17:00.235 Latency(us) 00:17:00.235 [2024-12-10T11:25:22.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.235 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:00.235 xnvme_bdev : 5.00 47124.06 184.08 0.00 0.00 1353.38 822.92 8996.31 00:17:00.235 [2024-12-10T11:25:22.402Z] =================================================================================================================== 00:17:00.235 [2024-12-10T11:25:22.402Z] Total : 47124.06 184.08 0.00 0.00 1353.38 822.92 8996.31 00:17:01.171 00:17:01.171 real 0m13.549s 00:17:01.171 user 0m7.402s 00:17:01.171 sys 0m5.947s 00:17:01.171 11:25:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.171 11:25:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:01.171 ************************************ 00:17:01.171 END TEST xnvme_bdevperf 00:17:01.171 ************************************ 00:17:01.171 11:25:23 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:01.171 11:25:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:01.171 11:25:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.171 11:25:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:01.171 ************************************ 00:17:01.171 START TEST xnvme_fio_plugin 00:17:01.171 ************************************ 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:01.171 11:25:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:01.430 { 00:17:01.430 "subsystems": [ 00:17:01.430 { 00:17:01.430 "subsystem": "bdev", 00:17:01.430 "config": [ 00:17:01.430 { 00:17:01.430 "params": { 00:17:01.430 "io_mechanism": "io_uring", 00:17:01.430 "conserve_cpu": false, 00:17:01.430 "filename": "/dev/nvme0n1", 00:17:01.430 "name": "xnvme_bdev" 00:17:01.430 }, 00:17:01.430 "method": "bdev_xnvme_create" 00:17:01.430 }, 00:17:01.430 { 00:17:01.430 "method": "bdev_wait_for_examine" 00:17:01.430 } 00:17:01.430 ] 00:17:01.430 } 00:17:01.430 ] 00:17:01.430 } 00:17:01.430 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:01.430 fio-3.35 00:17:01.430 Starting 1 thread 00:17:07.987 00:17:07.987 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71989: Tue Dec 10 11:25:29 2024 00:17:07.987 read: IOPS=49.2k, BW=192MiB/s (201MB/s)(961MiB/5001msec) 00:17:07.987 slat (usec): min=2, max=122, avg= 4.19, stdev= 1.93 00:17:07.987 clat (usec): min=168, max=7227, avg=1136.68, stdev=250.54 00:17:07.987 lat (usec): min=171, max=7231, avg=1140.88, stdev=251.18 00:17:07.987 clat percentiles (usec): 00:17:07.987 | 1.00th=[ 816], 5.00th=[ 898], 10.00th=[ 938], 20.00th=[ 988], 00:17:07.987 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1139], 00:17:07.987 | 70.00th=[ 1172], 80.00th=[ 1237], 90.00th=[ 1336], 95.00th=[ 1500], 00:17:07.987 | 99.00th=[ 2089], 99.50th=[ 2376], 99.90th=[ 3720], 99.95th=[ 4178], 00:17:07.987 | 99.99th=[ 4948] 00:17:07.987 bw ( KiB/s): min=179168, max=223744, per=100.00%, avg=198443.56, stdev=13364.47, 
samples=9 00:17:07.987 iops : min=44792, max=55936, avg=49610.89, stdev=3341.12, samples=9 00:17:07.987 lat (usec) : 250=0.01%, 500=0.10%, 750=0.37%, 1000=22.88% 00:17:07.987 lat (msec) : 2=75.44%, 4=1.13%, 10=0.07% 00:17:07.987 cpu : usr=40.00%, sys=59.02%, ctx=17, majf=0, minf=762 00:17:07.987 IO depths : 1=1.4%, 2=2.8%, 4=5.8%, 8=12.3%, 16=25.1%, 32=50.9%, >=64=1.6% 00:17:07.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.987 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:17:07.987 issued rwts: total=245978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.987 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:07.987 00:17:07.987 Run status group 0 (all jobs): 00:17:07.987 READ: bw=192MiB/s (201MB/s), 192MiB/s-192MiB/s (201MB/s-201MB/s), io=961MiB (1008MB), run=5001-5001msec 00:17:08.554 ----------------------------------------------------- 00:17:08.554 Suppressions used: 00:17:08.554 count bytes template 00:17:08.554 1 11 /usr/src/fio/parse.c 00:17:08.554 1 8 libtcmalloc_minimal.so 00:17:08.554 1 904 libcrypto.so 00:17:08.554 ----------------------------------------------------- 00:17:08.554 00:17:08.554 11:25:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:08.554 11:25:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:08.555 11:25:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:08.555 { 00:17:08.555 "subsystems": [ 00:17:08.555 { 00:17:08.555 "subsystem": "bdev", 00:17:08.555 "config": [ 00:17:08.555 { 00:17:08.555 "params": { 00:17:08.555 "io_mechanism": "io_uring", 00:17:08.555 "conserve_cpu": false, 00:17:08.555 "filename": "/dev/nvme0n1", 00:17:08.555 "name": "xnvme_bdev" 00:17:08.555 }, 00:17:08.555 "method": "bdev_xnvme_create" 00:17:08.555 }, 00:17:08.555 { 00:17:08.555 "method": "bdev_wait_for_examine" 00:17:08.555 } 00:17:08.555 ] 00:17:08.555 } 00:17:08.555 ] 00:17:08.555 } 00:17:08.814 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:08.814 fio-3.35 00:17:08.814 Starting 1 thread 00:17:15.381 00:17:15.381 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72081: Tue Dec 10 11:25:36 2024 00:17:15.381 write: IOPS=46.1k, BW=180MiB/s (189MB/s)(900MiB/5001msec); 0 zone resets 00:17:15.381 slat (usec): min=2, max=151, avg= 4.68, stdev= 2.28 00:17:15.381 clat (usec): min=805, max=7225, avg=1200.92, stdev=189.30 00:17:15.381 lat (usec): min=809, max=7234, avg=1205.60, stdev=189.93 00:17:15.381 clat percentiles (usec): 00:17:15.381 | 1.00th=[ 938], 5.00th=[ 988], 10.00th=[ 1020], 20.00th=[ 1074], 00:17:15.381 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1221], 00:17:15.381 | 70.00th=[ 1254], 80.00th=[ 1303], 90.00th=[ 1401], 95.00th=[ 1516], 00:17:15.381 | 99.00th=[ 1729], 99.50th=[ 1795], 99.90th=[ 1926], 99.95th=[ 2114], 00:17:15.381 | 99.99th=[ 7111] 00:17:15.381 bw ( KiB/s): min=179712, max=195584, per=100.00%, avg=185230.22, stdev=4655.95, samples=9 00:17:15.381 iops : min=44928, max=48896, avg=46307.56, stdev=1163.99, samples=9 00:17:15.381 lat (usec) : 1000=6.58% 00:17:15.381 lat (msec) : 2=93.35%, 4=0.04%, 10=0.03% 00:17:15.381 cpu : usr=42.16%, sys=56.82%, ctx=22, majf=0, minf=763 00:17:15.381 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:15.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.381 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:15.381 issued rwts: total=0,230464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.381 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:15.381 00:17:15.381 Run status group 0 (all jobs): 00:17:15.381 WRITE: bw=180MiB/s (189MB/s), 180MiB/s-180MiB/s (189MB/s-189MB/s), io=900MiB (944MB), run=5001-5001msec 00:17:15.640 ----------------------------------------------------- 00:17:15.640 Suppressions used: 00:17:15.640 count bytes template 00:17:15.640 1 11 /usr/src/fio/parse.c 00:17:15.640 1 8 libtcmalloc_minimal.so 00:17:15.640 1 904 libcrypto.so 00:17:15.640 ----------------------------------------------------- 00:17:15.640 00:17:15.640 ************************************ 00:17:15.640 END TEST xnvme_fio_plugin 00:17:15.640 00:17:15.640 real 0m14.518s 00:17:15.640 user 0m7.765s 00:17:15.640 sys 
0m6.378s 00:17:15.640 11:25:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.640 11:25:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:15.640 ************************************ 00:17:15.899 11:25:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:15.899 11:25:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:15.899 11:25:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:15.899 11:25:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:15.899 11:25:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:15.899 11:25:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.899 11:25:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.899 ************************************ 00:17:15.899 START TEST xnvme_rpc 00:17:15.899 ************************************ 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72166 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72166 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72166 ']' 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.899 11:25:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.899 [2024-12-10 11:25:37.982813] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
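[annotation] spdk_tgt (pid 72166) is starting for the conserve_cpu=true pass of xnvme_rpc. The rpc_cmd and rpc_xnvme helpers traced below wrap scripts/rpc.py and jq; a hedged sketch of the same create/inspect/delete sequence driven by hand, assuming the default /var/tmp/spdk.sock socket the target listens on:

# -c is the conserve_cpu flag the test maps from cc["true"] above; the jq filter
# is the one rpc_xnvme uses verbatim.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_delete xnvme_bdev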
00:17:15.899 [2024-12-10 11:25:37.982987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72166 ] 00:17:16.158 [2024-12-10 11:25:38.168879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.158 [2024-12-10 11:25:38.294966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.128 xnvme_bdev 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72166 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72166 ']' 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72166 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.128 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72166 00:17:17.387 killing process with pid 72166 00:17:17.387 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.387 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.387 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72166' 00:17:17.387 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72166 00:17:17.387 11:25:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72166 00:17:19.290 ************************************ 00:17:19.290 END TEST xnvme_rpc 00:17:19.290 ************************************ 00:17:19.290 00:17:19.290 real 0m3.493s 00:17:19.290 user 0m3.829s 00:17:19.290 sys 0m0.449s 00:17:19.290 11:25:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.290 11:25:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.290 11:25:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:19.290 11:25:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:19.290 11:25:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.290 11:25:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:19.290 ************************************ 00:17:19.290 START TEST xnvme_bdevperf 00:17:19.290 ************************************ 00:17:19.290 11:25:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:19.290 11:25:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:19.290 11:25:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:19.290 11:25:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:19.290 11:25:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:19.290 11:25:41 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:19.290 11:25:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:19.290 11:25:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:19.290 { 00:17:19.290 "subsystems": [ 00:17:19.290 { 00:17:19.290 "subsystem": "bdev", 00:17:19.290 "config": [ 00:17:19.290 { 00:17:19.290 "params": { 00:17:19.290 "io_mechanism": "io_uring", 00:17:19.290 "conserve_cpu": true, 00:17:19.290 "filename": "/dev/nvme0n1", 00:17:19.290 "name": "xnvme_bdev" 00:17:19.290 }, 00:17:19.290 "method": "bdev_xnvme_create" 00:17:19.290 }, 00:17:19.290 { 00:17:19.290 "method": "bdev_wait_for_examine" 00:17:19.290 } 00:17:19.290 ] 00:17:19.290 } 00:17:19.290 ] 00:17:19.290 } 00:17:19.550 [2024-12-10 11:25:41.485478] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:17:19.550 [2024-12-10 11:25:41.485857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72247 ] 00:17:19.550 [2024-12-10 11:25:41.658496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.808 [2024-12-10 11:25:41.760469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.067 Running I/O for 5 seconds... 00:17:21.937 51968.00 IOPS, 203.00 MiB/s [2024-12-10T11:25:45.479Z] 53888.00 IOPS, 210.50 MiB/s [2024-12-10T11:25:46.414Z] 53034.67 IOPS, 207.17 MiB/s [2024-12-10T11:25:47.351Z] 53376.00 IOPS, 208.50 MiB/s [2024-12-10T11:25:47.351Z] 52838.40 IOPS, 206.40 MiB/s 00:17:25.184 Latency(us) 00:17:25.184 [2024-12-10T11:25:47.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.184 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:25.184 xnvme_bdev : 5.01 52791.28 206.22 0.00 0.00 1208.41 763.35 5213.09 00:17:25.184 [2024-12-10T11:25:47.351Z] =================================================================================================================== 00:17:25.184 [2024-12-10T11:25:47.351Z] Total : 52791.28 206.22 0.00 0.00 1208.41 763.35 5213.09 00:17:26.123 11:25:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:26.123 11:25:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:26.123 11:25:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:26.123 11:25:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:26.123 11:25:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:26.123 { 00:17:26.123 "subsystems": [ 00:17:26.123 { 00:17:26.123 "subsystem": "bdev", 00:17:26.123 "config": [ 00:17:26.123 { 00:17:26.123 "params": { 00:17:26.123 "io_mechanism": "io_uring", 00:17:26.123 "conserve_cpu": true, 00:17:26.123 "filename": "/dev/nvme0n1", 00:17:26.123 "name": "xnvme_bdev" 00:17:26.123 }, 00:17:26.123 "method": "bdev_xnvme_create" 00:17:26.123 }, 00:17:26.123 { 00:17:26.123 "method": "bdev_wait_for_examine" 00:17:26.124 } 00:17:26.124 ] 00:17:26.124 } 00:17:26.124 ] 00:17:26.124 } 00:17:26.124 [2024-12-10 11:25:48.207786] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
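[annotation] The config just dumped differs from the conserve_cpu=false one earlier only in that single flag, and the usr/sys split in the fio passes moves with it (usr=40.00%/sys=59.02% with false above, usr=69.46%/sys=26.42% with true further below). A sketch of checking that delta, with conf_false.json and conf_true.json as illustrative names for the two dumped blobs:

# Normalize key order with jq, then diff; only conserve_cpu should differ.
diff <(jq -S . conf_false.json) <(jq -S . conf_true.json)
# <     "conserve_cpu": false
# >     "conserve_cpu": true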
00:17:26.124 [2024-12-10 11:25:48.207956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72317 ] 00:17:26.382 [2024-12-10 11:25:48.382529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.382 [2024-12-10 11:25:48.488688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.641 Running I/O for 5 seconds... 00:17:28.953 47808.00 IOPS, 186.75 MiB/s [2024-12-10T11:25:52.056Z] 47424.00 IOPS, 185.25 MiB/s [2024-12-10T11:25:52.992Z] 47082.67 IOPS, 183.92 MiB/s [2024-12-10T11:25:53.929Z] 46912.00 IOPS, 183.25 MiB/s [2024-12-10T11:25:53.929Z] 47270.40 IOPS, 184.65 MiB/s 00:17:31.762 Latency(us) 00:17:31.762 [2024-12-10T11:25:53.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.762 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:31.762 xnvme_bdev : 5.00 47247.04 184.56 0.00 0.00 1349.85 830.37 3842.79 00:17:31.762 [2024-12-10T11:25:53.929Z] =================================================================================================================== 00:17:31.762 [2024-12-10T11:25:53.929Z] Total : 47247.04 184.56 0.00 0.00 1349.85 830.37 3842.79 00:17:32.699 00:17:32.699 real 0m13.463s 00:17:32.699 user 0m9.757s 00:17:32.699 sys 0m3.203s 00:17:32.699 11:25:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.699 ************************************ 00:17:32.699 END TEST xnvme_bdevperf 00:17:32.699 ************************************ 00:17:32.699 11:25:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:32.958 11:25:54 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:32.958 11:25:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:32.958 11:25:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.958 11:25:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:32.958 ************************************ 00:17:32.958 START TEST xnvme_fio_plugin 00:17:32.958 ************************************ 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:32.958 
11:25:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:32.958 11:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:32.958 { 00:17:32.958 "subsystems": [ 00:17:32.958 { 00:17:32.958 "subsystem": "bdev", 00:17:32.958 "config": [ 00:17:32.958 { 00:17:32.958 "params": { 00:17:32.958 "io_mechanism": "io_uring", 00:17:32.958 "conserve_cpu": true, 00:17:32.958 "filename": "/dev/nvme0n1", 00:17:32.958 "name": "xnvme_bdev" 00:17:32.958 }, 00:17:32.958 "method": "bdev_xnvme_create" 00:17:32.958 }, 00:17:32.958 { 00:17:32.958 "method": "bdev_wait_for_examine" 00:17:32.958 } 00:17:32.958 ] 00:17:32.958 } 00:17:32.958 ] 00:17:32.958 } 00:17:33.217 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:33.217 fio-3.35 00:17:33.217 Starting 1 thread 00:17:39.781 00:17:39.781 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72442: Tue Dec 10 11:26:00 2024 00:17:39.781 read: IOPS=49.4k, BW=193MiB/s (202MB/s)(964MiB/5001msec) 00:17:39.781 slat (nsec): min=2931, max=90890, avg=4194.71, stdev=1606.38 00:17:39.781 clat (usec): min=795, max=2459, avg=1129.36, stdev=148.71 00:17:39.781 lat (usec): min=798, max=2489, avg=1133.55, stdev=149.19 00:17:39.781 clat percentiles (usec): 00:17:39.781 | 1.00th=[ 898], 5.00th=[ 938], 10.00th=[ 971], 20.00th=[ 1012], 00:17:39.781 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:17:39.781 | 70.00th=[ 1172], 80.00th=[ 1221], 90.00th=[ 1287], 95.00th=[ 1401], 00:17:39.781 | 99.00th=[ 1663], 99.50th=[ 1745], 99.90th=[ 1893], 99.95th=[ 1958], 00:17:39.781 | 99.99th=[ 2212] 00:17:39.781 bw ( KiB/s): 
min=181760, max=210432, per=99.69%, avg=196835.56, stdev=9510.36, samples=9 00:17:39.781 iops : min=45440, max=52608, avg=49208.89, stdev=2377.59, samples=9 00:17:39.781 lat (usec) : 1000=16.96% 00:17:39.781 lat (msec) : 2=83.01%, 4=0.03% 00:17:39.781 cpu : usr=69.46%, sys=26.42%, ctx=50, majf=0, minf=762 00:17:39.781 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:39.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.781 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:39.781 issued rwts: total=246848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.781 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:39.781 00:17:39.781 Run status group 0 (all jobs): 00:17:39.781 READ: bw=193MiB/s (202MB/s), 193MiB/s-193MiB/s (202MB/s-202MB/s), io=964MiB (1011MB), run=5001-5001msec 00:17:40.348 ----------------------------------------------------- 00:17:40.348 Suppressions used: 00:17:40.348 count bytes template 00:17:40.348 1 11 /usr/src/fio/parse.c 00:17:40.348 1 8 libtcmalloc_minimal.so 00:17:40.348 1 904 libcrypto.so 00:17:40.348 ----------------------------------------------------- 00:17:40.348 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:40.348 11:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:40.348 { 00:17:40.348 "subsystems": [ 00:17:40.348 { 00:17:40.348 "subsystem": "bdev", 00:17:40.348 "config": [ 00:17:40.348 { 00:17:40.348 "params": { 00:17:40.348 "io_mechanism": "io_uring", 00:17:40.348 "conserve_cpu": true, 00:17:40.348 "filename": "/dev/nvme0n1", 00:17:40.348 "name": "xnvme_bdev" 00:17:40.348 }, 00:17:40.348 "method": "bdev_xnvme_create" 00:17:40.348 }, 00:17:40.348 { 00:17:40.348 "method": "bdev_wait_for_examine" 00:17:40.348 } 00:17:40.348 ] 00:17:40.348 } 00:17:40.348 ] 00:17:40.348 } 00:17:40.607 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:40.607 fio-3.35 00:17:40.607 Starting 1 thread 00:17:47.171 00:17:47.171 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72539: Tue Dec 10 11:26:08 2024 00:17:47.171 write: IOPS=49.4k, BW=193MiB/s (202MB/s)(965MiB/5001msec); 0 zone resets 00:17:47.171 slat (usec): min=3, max=112, avg= 4.29, stdev= 1.83 00:17:47.171 clat (usec): min=770, max=2590, avg=1125.72, stdev=166.00 00:17:47.171 lat (usec): min=774, max=2623, avg=1130.01, stdev=166.83 00:17:47.171 clat percentiles (usec): 00:17:47.171 | 1.00th=[ 881], 5.00th=[ 922], 10.00th=[ 955], 20.00th=[ 996], 00:17:47.171 | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:17:47.171 | 70.00th=[ 1172], 80.00th=[ 1221], 90.00th=[ 1303], 95.00th=[ 1434], 00:17:47.171 | 99.00th=[ 1762], 99.50th=[ 1844], 99.90th=[ 2024], 99.95th=[ 2147], 00:17:47.171 | 99.99th=[ 2376] 00:17:47.171 bw ( KiB/s): min=185344, max=215552, per=100.00%, avg=198200.89, stdev=9306.03, samples=9 00:17:47.171 iops : min=46336, max=53888, avg=49550.22, stdev=2326.51, samples=9 00:17:47.171 lat (usec) : 1000=20.21% 00:17:47.171 lat (msec) : 2=79.67%, 4=0.12% 00:17:47.171 cpu : usr=68.76%, sys=27.42%, ctx=14, majf=0, minf=763 00:17:47.171 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:47.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.171 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:47.171 issued rwts: total=0,246976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:47.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:47.171 00:17:47.171 Run status group 0 (all jobs): 00:17:47.171 WRITE: bw=193MiB/s (202MB/s), 193MiB/s-193MiB/s (202MB/s-202MB/s), io=965MiB (1012MB), run=5001-5001msec 00:17:47.430 ----------------------------------------------------- 00:17:47.430 Suppressions used: 00:17:47.430 count bytes template 00:17:47.430 1 11 /usr/src/fio/parse.c 00:17:47.430 1 8 libtcmalloc_minimal.so 00:17:47.430 1 904 libcrypto.so 00:17:47.430 ----------------------------------------------------- 00:17:47.430 00:17:47.430 ************************************ 00:17:47.430 END TEST xnvme_fio_plugin 00:17:47.430 ************************************ 00:17:47.430 00:17:47.430 real 0m14.627s 
00:17:47.430 user 0m10.623s 00:17:47.430 sys 0m3.312s 00:17:47.430 11:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.430 11:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:47.430 11:26:09 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:47.430 11:26:09 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:17:47.430 11:26:09 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:17:47.430 11:26:09 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:17:47.430 11:26:09 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:47.430 11:26:09 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:47.430 11:26:09 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:47.430 11:26:09 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:47.430 11:26:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:47.430 11:26:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:47.430 11:26:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.430 11:26:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:47.430 ************************************ 00:17:47.430 START TEST xnvme_rpc 00:17:47.430 ************************************ 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72627 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72627 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72627 ']' 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.430 11:26:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.689 [2024-12-10 11:26:09.714784] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
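[annotation] The harness has now switched io_mechanism to io_uring_cmd and the filename to /dev/ng0n1, the NVMe generic character device, so I/O is issued as NVMe passthrough commands rather than through the block layer. A sketch of the create call the test issues next (the conserve_cpu argument is passed empty, i.e. false):

# Same positional form as xnvme.sh@56, with the char device and no -c flag.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd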
00:17:47.689 [2024-12-10 11:26:09.715205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72627 ] 00:17:47.948 [2024-12-10 11:26:09.903603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.948 [2024-12-10 11:26:10.039461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.885 xnvme_bdev 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:48.885 
11:26:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.885 11:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.885 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.885 11:26:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:48.885 11:26:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:48.885 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.885 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.885 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.885 11:26:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72627 00:17:48.885 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72627 ']' 00:17:48.885 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72627 00:17:48.885 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:48.885 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.885 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72627 00:17:49.143 killing process with pid 72627 00:17:49.143 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.143 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.143 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72627' 00:17:49.143 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72627 00:17:49.143 11:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72627 00:17:51.046 00:17:51.046 real 0m3.562s 00:17:51.046 user 0m3.880s 00:17:51.046 sys 0m0.436s 00:17:51.046 ************************************ 00:17:51.046 END TEST xnvme_rpc 00:17:51.046 ************************************ 00:17:51.046 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.046 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.046 11:26:13 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:51.046 11:26:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:51.046 11:26:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.046 11:26:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:51.046 ************************************ 00:17:51.046 START TEST xnvme_bdevperf 00:17:51.046 ************************************ 00:17:51.046 11:26:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:51.046 11:26:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:51.046 11:26:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:17:51.046 11:26:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:51.046 11:26:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:51.046 11:26:13 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:51.046 11:26:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:51.046 11:26:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:51.305 { 00:17:51.305 "subsystems": [ 00:17:51.305 { 00:17:51.305 "subsystem": "bdev", 00:17:51.305 "config": [ 00:17:51.305 { 00:17:51.305 "params": { 00:17:51.305 "io_mechanism": "io_uring_cmd", 00:17:51.305 "conserve_cpu": false, 00:17:51.305 "filename": "/dev/ng0n1", 00:17:51.305 "name": "xnvme_bdev" 00:17:51.305 }, 00:17:51.305 "method": "bdev_xnvme_create" 00:17:51.305 }, 00:17:51.305 { 00:17:51.305 "method": "bdev_wait_for_examine" 00:17:51.305 } 00:17:51.305 ] 00:17:51.305 } 00:17:51.306 ] 00:17:51.306 } 00:17:51.306 [2024-12-10 11:26:13.310417] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:17:51.306 [2024-12-10 11:26:13.310866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72708 ] 00:17:51.565 [2024-12-10 11:26:13.495368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.565 [2024-12-10 11:26:13.598391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.823 Running I/O for 5 seconds... 00:17:54.184 53952.00 IOPS, 210.75 MiB/s [2024-12-10T11:26:16.917Z] 52672.00 IOPS, 205.75 MiB/s [2024-12-10T11:26:18.292Z] 52224.00 IOPS, 204.00 MiB/s [2024-12-10T11:26:19.228Z] 52048.00 IOPS, 203.31 MiB/s [2024-12-10T11:26:19.228Z] 51387.00 IOPS, 200.73 MiB/s 00:17:57.061 Latency(us) 00:17:57.061 [2024-12-10T11:26:19.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.061 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:57.061 xnvme_bdev : 5.00 51351.60 200.59 0.00 0.00 1242.12 778.24 7923.90 00:17:57.061 [2024-12-10T11:26:19.228Z] =================================================================================================================== 00:17:57.061 [2024-12-10T11:26:19.228Z] Total : 51351.60 200.59 0.00 0.00 1242.12 778.24 7923.90 00:17:57.997 11:26:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:57.997 11:26:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:57.997 11:26:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:57.997 11:26:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:57.997 11:26:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:57.997 { 00:17:57.997 "subsystems": [ 00:17:57.997 { 00:17:57.997 "subsystem": "bdev", 00:17:57.997 "config": [ 00:17:57.997 { 00:17:57.997 "params": { 00:17:57.997 "io_mechanism": "io_uring_cmd", 00:17:57.997 "conserve_cpu": false, 00:17:57.997 "filename": "/dev/ng0n1", 00:17:57.997 "name": "xnvme_bdev" 00:17:57.997 }, 00:17:57.997 "method": "bdev_xnvme_create" 00:17:57.997 }, 00:17:57.997 { 00:17:57.997 "method": "bdev_wait_for_examine" 00:17:57.997 } 00:17:57.997 ] 00:17:57.997 } 00:17:57.997 ] 00:17:57.997 } 00:17:57.997 [2024-12-10 11:26:20.042328] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
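[annotation] For the io_uring_cmd pass the bdevperf pattern list grows beyond randread/randwrite: unmap and write_zeroes runs follow below (spdk_pid72863 and spdk_pid72942). A sketch of the loop these traces correspond to, with xnvme.json again standing in for the JSON the harness streams over /dev/fd/62:

# One five-second bdevperf pass per workload, mirroring xnvme.sh@15..17.
for w in randread randwrite unmap write_zeroes; do
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json xnvme.json \
        -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
done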
00:17:57.997 [2024-12-10 11:26:20.042526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72784 ] 00:17:58.256 [2024-12-10 11:26:20.230264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.256 [2024-12-10 11:26:20.356864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.515 Running I/O for 5 seconds... 00:18:00.826 47232.00 IOPS, 184.50 MiB/s [2024-12-10T11:26:23.960Z] 48640.00 IOPS, 190.00 MiB/s [2024-12-10T11:26:24.895Z] 48362.67 IOPS, 188.92 MiB/s [2024-12-10T11:26:25.830Z] 48096.00 IOPS, 187.88 MiB/s 00:18:03.663 Latency(us) 00:18:03.663 [2024-12-10T11:26:25.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.663 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:03.663 xnvme_bdev : 5.00 48589.02 189.80 0.00 0.00 1312.63 848.99 3902.37 00:18:03.663 [2024-12-10T11:26:25.830Z] =================================================================================================================== 00:18:03.663 [2024-12-10T11:26:25.830Z] Total : 48589.02 189.80 0.00 0.00 1312.63 848.99 3902.37 00:18:04.599 11:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:04.599 11:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:04.599 11:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:04.599 11:26:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:04.599 11:26:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:04.858 { 00:18:04.858 "subsystems": [ 00:18:04.858 { 00:18:04.858 "subsystem": "bdev", 00:18:04.858 "config": [ 00:18:04.858 { 00:18:04.858 "params": { 00:18:04.858 "io_mechanism": "io_uring_cmd", 00:18:04.858 "conserve_cpu": false, 00:18:04.858 "filename": "/dev/ng0n1", 00:18:04.858 "name": "xnvme_bdev" 00:18:04.858 }, 00:18:04.858 "method": "bdev_xnvme_create" 00:18:04.858 }, 00:18:04.858 { 00:18:04.858 "method": "bdev_wait_for_examine" 00:18:04.858 } 00:18:04.858 ] 00:18:04.858 } 00:18:04.858 ] 00:18:04.858 } 00:18:04.858 [2024-12-10 11:26:26.846747] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:18:04.858 [2024-12-10 11:26:26.846916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72863 ] 00:18:05.116 [2024-12-10 11:26:27.030989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.116 [2024-12-10 11:26:27.134557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.375 Running I/O for 5 seconds... 
00:18:07.688 67200.00 IOPS, 262.50 MiB/s [2024-12-10T11:26:30.792Z] 67392.00 IOPS, 263.25 MiB/s [2024-12-10T11:26:31.727Z] 68309.33 IOPS, 266.83 MiB/s [2024-12-10T11:26:32.664Z] 67392.00 IOPS, 263.25 MiB/s 00:18:10.497 Latency(us) 00:18:10.497 [2024-12-10T11:26:32.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.497 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:10.497 xnvme_bdev : 5.00 67456.48 263.50 0.00 0.00 944.61 487.80 2978.91 00:18:10.497 [2024-12-10T11:26:32.664Z] =================================================================================================================== 00:18:10.497 [2024-12-10T11:26:32.664Z] Total : 67456.48 263.50 0.00 0.00 944.61 487.80 2978.91 00:18:11.465 11:26:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:11.465 11:26:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:11.465 11:26:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:11.465 11:26:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:11.465 11:26:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:11.465 { 00:18:11.465 "subsystems": [ 00:18:11.465 { 00:18:11.465 "subsystem": "bdev", 00:18:11.465 "config": [ 00:18:11.465 { 00:18:11.465 "params": { 00:18:11.465 "io_mechanism": "io_uring_cmd", 00:18:11.465 "conserve_cpu": false, 00:18:11.465 "filename": "/dev/ng0n1", 00:18:11.465 "name": "xnvme_bdev" 00:18:11.465 }, 00:18:11.465 "method": "bdev_xnvme_create" 00:18:11.465 }, 00:18:11.465 { 00:18:11.465 "method": "bdev_wait_for_examine" 00:18:11.465 } 00:18:11.465 ] 00:18:11.465 } 00:18:11.465 ] 00:18:11.465 } 00:18:11.465 [2024-12-10 11:26:33.588144] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:18:11.465 [2024-12-10 11:26:33.588317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72942 ] 00:18:11.724 [2024-12-10 11:26:33.770219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.724 [2024-12-10 11:26:33.872978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.290 Running I/O for 5 seconds... 
00:18:14.160 41896.00 IOPS, 163.66 MiB/s [2024-12-10T11:26:37.263Z] 27139.00 IOPS, 106.01 MiB/s [2024-12-10T11:26:38.197Z] 30694.00 IOPS, 119.90 MiB/s [2024-12-10T11:26:39.572Z] 33515.25 IOPS, 130.92 MiB/s 00:18:17.405 Latency(us) 00:18:17.405 [2024-12-10T11:26:39.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.405 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:17.405 xnvme_bdev : 5.00 35313.37 137.94 0.00 0.00 1807.55 70.75 95801.72 00:18:17.405 [2024-12-10T11:26:39.572Z] =================================================================================================================== 00:18:17.405 [2024-12-10T11:26:39.572Z] Total : 35313.37 137.94 0.00 0.00 1807.55 70.75 95801.72 00:18:18.341 00:18:18.341 real 0m27.081s 00:18:18.341 user 0m15.694s 00:18:18.341 sys 0m10.988s 00:18:18.341 ************************************ 00:18:18.341 END TEST xnvme_bdevperf 00:18:18.341 ************************************ 00:18:18.341 11:26:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.341 11:26:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:18.341 11:26:40 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:18.341 11:26:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:18.341 11:26:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.341 11:26:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:18.341 ************************************ 00:18:18.341 START TEST xnvme_fio_plugin 00:18:18.341 ************************************ 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:18.341 11:26:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:18.341 { 00:18:18.341 "subsystems": [ 00:18:18.341 { 00:18:18.341 "subsystem": "bdev", 00:18:18.341 "config": [ 00:18:18.341 { 00:18:18.341 "params": { 00:18:18.341 "io_mechanism": "io_uring_cmd", 00:18:18.341 "conserve_cpu": false, 00:18:18.341 "filename": "/dev/ng0n1", 00:18:18.341 "name": "xnvme_bdev" 00:18:18.341 }, 00:18:18.341 "method": "bdev_xnvme_create" 00:18:18.341 }, 00:18:18.341 { 00:18:18.341 "method": "bdev_wait_for_examine" 00:18:18.341 } 00:18:18.341 ] 00:18:18.342 } 00:18:18.342 ] 00:18:18.342 } 00:18:18.600 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:18.600 fio-3.35 00:18:18.600 Starting 1 thread 00:18:25.157 00:18:25.157 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73060: Tue Dec 10 11:26:46 2024 00:18:25.157 read: IOPS=53.3k, BW=208MiB/s (218MB/s)(1041MiB/5001msec) 00:18:25.157 slat (nsec): min=2897, max=96514, avg=3774.82, stdev=1635.98 00:18:25.157 clat (usec): min=762, max=2826, avg=1052.07, stdev=149.48 00:18:25.157 lat (usec): min=765, max=2851, avg=1055.84, stdev=150.26 00:18:25.157 clat percentiles (usec): 00:18:25.157 | 1.00th=[ 840], 5.00th=[ 881], 10.00th=[ 906], 20.00th=[ 947], 00:18:25.157 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1037], 60.00th=[ 1057], 00:18:25.157 | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[ 1188], 95.00th=[ 1237], 00:18:25.157 | 99.00th=[ 1663], 99.50th=[ 1958], 99.90th=[ 2343], 99.95th=[ 2442], 00:18:25.157 | 99.99th=[ 2638] 00:18:25.157 bw ( KiB/s): min=187392, max=225280, per=99.23%, avg=211454.22, stdev=12887.13, samples=9 00:18:25.157 iops : min=46848, max=56320, avg=52864.00, stdev=3221.40, samples=9 00:18:25.157 lat (usec) : 1000=37.31% 00:18:25.157 lat (msec) : 2=62.26%, 4=0.43% 00:18:25.157 cpu : usr=44.56%, sys=54.60%, ctx=17, majf=0, minf=762 00:18:25.157 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:25.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.157 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:25.157 issued rwts: total=266432,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:18:25.157 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:25.157 00:18:25.157 Run status group 0 (all jobs): 00:18:25.157 READ: bw=208MiB/s (218MB/s), 208MiB/s-208MiB/s (218MB/s-218MB/s), io=1041MiB (1091MB), run=5001-5001msec 00:18:25.416 ----------------------------------------------------- 00:18:25.416 Suppressions used: 00:18:25.416 count bytes template 00:18:25.416 1 11 /usr/src/fio/parse.c 00:18:25.416 1 8 libtcmalloc_minimal.so 00:18:25.416 1 904 libcrypto.so 00:18:25.416 ----------------------------------------------------- 00:18:25.416 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:25.416 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:25.675 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:25.675 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:25.675 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:25.675 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:25.675 11:26:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 
--name xnvme_bdev 00:18:25.675 { 00:18:25.675 "subsystems": [ 00:18:25.675 { 00:18:25.675 "subsystem": "bdev", 00:18:25.675 "config": [ 00:18:25.675 { 00:18:25.675 "params": { 00:18:25.675 "io_mechanism": "io_uring_cmd", 00:18:25.675 "conserve_cpu": false, 00:18:25.675 "filename": "/dev/ng0n1", 00:18:25.675 "name": "xnvme_bdev" 00:18:25.675 }, 00:18:25.675 "method": "bdev_xnvme_create" 00:18:25.675 }, 00:18:25.675 { 00:18:25.675 "method": "bdev_wait_for_examine" 00:18:25.675 } 00:18:25.675 ] 00:18:25.675 } 00:18:25.675 ] 00:18:25.675 } 00:18:25.675 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:25.675 fio-3.35 00:18:25.675 Starting 1 thread 00:18:32.263 00:18:32.263 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73151: Tue Dec 10 11:26:53 2024 00:18:32.263 write: IOPS=47.8k, BW=187MiB/s (196MB/s)(934MiB/5001msec); 0 zone resets 00:18:32.263 slat (usec): min=2, max=603, avg= 4.52, stdev= 2.50 00:18:32.263 clat (usec): min=778, max=3563, avg=1158.57, stdev=158.75 00:18:32.263 lat (usec): min=781, max=3571, avg=1163.09, stdev=159.30 00:18:32.263 clat percentiles (usec): 00:18:32.263 | 1.00th=[ 889], 5.00th=[ 955], 10.00th=[ 988], 20.00th=[ 1037], 00:18:32.263 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1172], 00:18:32.263 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1336], 95.00th=[ 1434], 00:18:32.263 | 99.00th=[ 1696], 99.50th=[ 1778], 99.90th=[ 2057], 99.95th=[ 2245], 00:18:32.263 | 99.99th=[ 3458] 00:18:32.263 bw ( KiB/s): min=182272, max=203264, per=99.65%, avg=190520.89, stdev=6427.82, samples=9 00:18:32.263 iops : min=45568, max=50816, avg=47630.22, stdev=1606.95, samples=9 00:18:32.263 lat (usec) : 1000=12.44% 00:18:32.263 lat (msec) : 2=87.42%, 4=0.13% 00:18:32.263 cpu : usr=44.66%, sys=54.28%, ctx=19, majf=0, minf=763 00:18:32.263 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:32.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.263 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:32.263 issued rwts: total=0,239040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.263 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.263 00:18:32.263 Run status group 0 (all jobs): 00:18:32.263 WRITE: bw=187MiB/s (196MB/s), 187MiB/s-187MiB/s (196MB/s-196MB/s), io=934MiB (979MB), run=5001-5001msec 00:18:32.830 ----------------------------------------------------- 00:18:32.830 Suppressions used: 00:18:32.830 count bytes template 00:18:32.830 1 11 /usr/src/fio/parse.c 00:18:32.830 1 8 libtcmalloc_minimal.so 00:18:32.830 1 904 libcrypto.so 00:18:32.830 ----------------------------------------------------- 00:18:32.830 00:18:32.830 ************************************ 00:18:32.830 END TEST xnvme_fio_plugin 00:18:32.830 ************************************ 00:18:32.830 00:18:32.830 real 0m14.428s 00:18:32.830 user 0m8.062s 00:18:32.831 sys 0m6.017s 00:18:32.831 11:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.831 11:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:32.831 11:26:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:32.831 11:26:54 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:32.831 11:26:54 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:32.831 11:26:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 
00:18:32.831 11:26:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:32.831 11:26:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.831 11:26:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:32.831 ************************************ 00:18:32.831 START TEST xnvme_rpc 00:18:32.831 ************************************ 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73231 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73231 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73231 ']' 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.831 11:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.831 [2024-12-10 11:26:54.944919] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:18:32.831 [2024-12-10 11:26:54.945259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73231 ] 00:18:33.090 [2024-12-10 11:26:55.131350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.349 [2024-12-10 11:26:55.257273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.915 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.915 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:33.915 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:18:33.915 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.915 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.915 xnvme_bdev 00:18:33.915 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73231 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73231 ']' 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73231 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73231 00:18:34.174 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.174 killing process with pid 73231 00:18:34.175 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.175 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73231' 00:18:34.175 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73231 00:18:34.175 11:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73231 00:18:36.734 00:18:36.734 real 0m3.585s 00:18:36.734 user 0m3.920s 00:18:36.734 sys 0m0.445s 00:18:36.734 ************************************ 00:18:36.734 END TEST xnvme_rpc 00:18:36.734 ************************************ 00:18:36.734 11:26:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.734 11:26:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.734 11:26:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:36.734 11:26:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:36.734 11:26:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.734 11:26:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:36.734 ************************************ 00:18:36.734 START TEST xnvme_bdevperf 00:18:36.734 ************************************ 00:18:36.734 11:26:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:36.734 11:26:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:36.734 11:26:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:36.734 11:26:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:36.734 11:26:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:36.734 11:26:58 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:36.734 11:26:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:36.734 11:26:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:36.734 { 00:18:36.734 "subsystems": [ 00:18:36.734 { 00:18:36.734 "subsystem": "bdev", 00:18:36.734 "config": [ 00:18:36.734 { 00:18:36.734 "params": { 00:18:36.734 "io_mechanism": "io_uring_cmd", 00:18:36.734 "conserve_cpu": true, 00:18:36.734 "filename": "/dev/ng0n1", 00:18:36.734 "name": "xnvme_bdev" 00:18:36.734 }, 00:18:36.734 "method": "bdev_xnvme_create" 00:18:36.734 }, 00:18:36.734 { 00:18:36.734 "method": "bdev_wait_for_examine" 00:18:36.734 } 00:18:36.734 ] 00:18:36.734 } 00:18:36.734 ] 00:18:36.734 } 00:18:36.734 [2024-12-10 11:26:58.565079] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:18:36.735 [2024-12-10 11:26:58.565264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73315 ] 00:18:36.735 [2024-12-10 11:26:58.745098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.735 [2024-12-10 11:26:58.847541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.302 Running I/O for 5 seconds... 00:18:39.174 51646.00 IOPS, 201.74 MiB/s [2024-12-10T11:27:02.275Z] 52031.00 IOPS, 203.25 MiB/s [2024-12-10T11:27:03.208Z] 51199.33 IOPS, 200.00 MiB/s [2024-12-10T11:27:04.583Z] 51119.50 IOPS, 199.69 MiB/s [2024-12-10T11:27:04.583Z] 51238.00 IOPS, 200.15 MiB/s 00:18:42.416 Latency(us) 00:18:42.416 [2024-12-10T11:27:04.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.416 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:42.416 xnvme_bdev : 5.00 51224.88 200.10 0.00 0.00 1245.19 662.81 4647.10 00:18:42.416 [2024-12-10T11:27:04.583Z] =================================================================================================================== 00:18:42.416 [2024-12-10T11:27:04.583Z] Total : 51224.88 200.10 0.00 0.00 1245.19 662.81 4647.10 00:18:42.983 11:27:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:42.983 11:27:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:42.983 11:27:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:42.983 11:27:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:42.983 11:27:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:43.241 { 00:18:43.241 "subsystems": [ 00:18:43.241 { 00:18:43.241 "subsystem": "bdev", 00:18:43.241 "config": [ 00:18:43.241 { 00:18:43.241 "params": { 00:18:43.241 "io_mechanism": "io_uring_cmd", 00:18:43.241 "conserve_cpu": true, 00:18:43.241 "filename": "/dev/ng0n1", 00:18:43.241 "name": "xnvme_bdev" 00:18:43.241 }, 00:18:43.241 "method": "bdev_xnvme_create" 00:18:43.241 }, 00:18:43.241 { 00:18:43.241 "method": "bdev_wait_for_examine" 00:18:43.241 } 00:18:43.241 ] 00:18:43.241 } 00:18:43.241 ] 00:18:43.241 } 00:18:43.241 [2024-12-10 11:27:05.254457] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:18:43.241 [2024-12-10 11:27:05.254772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73391 ] 00:18:43.499 [2024-12-10 11:27:05.440144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.499 [2024-12-10 11:27:05.544724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.757 Running I/O for 5 seconds... 00:18:45.698 45312.00 IOPS, 177.00 MiB/s [2024-12-10T11:27:09.239Z] 45984.00 IOPS, 179.62 MiB/s [2024-12-10T11:27:10.172Z] 46080.00 IOPS, 180.00 MiB/s [2024-12-10T11:27:11.106Z] 46496.00 IOPS, 181.62 MiB/s [2024-12-10T11:27:11.106Z] 46336.00 IOPS, 181.00 MiB/s 00:18:48.939 Latency(us) 00:18:48.939 [2024-12-10T11:27:11.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.939 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:48.939 xnvme_bdev : 5.00 46314.32 180.92 0.00 0.00 1376.92 800.58 6106.76 00:18:48.939 [2024-12-10T11:27:11.106Z] =================================================================================================================== 00:18:48.939 [2024-12-10T11:27:11.106Z] Total : 46314.32 180.92 0.00 0.00 1376.92 800.58 6106.76 00:18:49.911 11:27:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:49.911 11:27:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:49.911 11:27:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:49.911 11:27:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:49.911 11:27:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:49.911 { 00:18:49.911 "subsystems": [ 00:18:49.911 { 00:18:49.911 "subsystem": "bdev", 00:18:49.911 "config": [ 00:18:49.911 { 00:18:49.911 "params": { 00:18:49.911 "io_mechanism": "io_uring_cmd", 00:18:49.911 "conserve_cpu": true, 00:18:49.911 "filename": "/dev/ng0n1", 00:18:49.911 "name": "xnvme_bdev" 00:18:49.911 }, 00:18:49.911 "method": "bdev_xnvme_create" 00:18:49.911 }, 00:18:49.911 { 00:18:49.911 "method": "bdev_wait_for_examine" 00:18:49.911 } 00:18:49.911 ] 00:18:49.911 } 00:18:49.911 ] 00:18:49.911 } 00:18:49.911 [2024-12-10 11:27:11.999173] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:18:49.911 [2024-12-10 11:27:11.999529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73465 ] 00:18:50.169 [2024-12-10 11:27:12.175528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.169 [2024-12-10 11:27:12.283953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.736 Running I/O for 5 seconds... 
00:18:52.606 69952.00 IOPS, 273.25 MiB/s [2024-12-10T11:27:15.705Z] 68960.00 IOPS, 269.38 MiB/s [2024-12-10T11:27:16.637Z] 69674.67 IOPS, 272.17 MiB/s [2024-12-10T11:27:18.010Z] 69952.00 IOPS, 273.25 MiB/s [2024-12-10T11:27:18.010Z] 70041.60 IOPS, 273.60 MiB/s 00:18:55.843 Latency(us) 00:18:55.843 [2024-12-10T11:27:18.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.843 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:55.843 xnvme_bdev : 5.00 70021.18 273.52 0.00 0.00 910.00 525.03 3112.96 00:18:55.843 [2024-12-10T11:27:18.010Z] =================================================================================================================== 00:18:55.843 [2024-12-10T11:27:18.010Z] Total : 70021.18 273.52 0.00 0.00 910.00 525.03 3112.96 00:18:56.778 11:27:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:56.778 11:27:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:56.778 11:27:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:56.778 11:27:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:56.778 11:27:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:56.778 { 00:18:56.778 "subsystems": [ 00:18:56.778 { 00:18:56.778 "subsystem": "bdev", 00:18:56.778 "config": [ 00:18:56.778 { 00:18:56.778 "params": { 00:18:56.778 "io_mechanism": "io_uring_cmd", 00:18:56.778 "conserve_cpu": true, 00:18:56.778 "filename": "/dev/ng0n1", 00:18:56.778 "name": "xnvme_bdev" 00:18:56.778 }, 00:18:56.778 "method": "bdev_xnvme_create" 00:18:56.778 }, 00:18:56.778 { 00:18:56.778 "method": "bdev_wait_for_examine" 00:18:56.778 } 00:18:56.778 ] 00:18:56.778 } 00:18:56.778 ] 00:18:56.778 } 00:18:56.778 [2024-12-10 11:27:18.725533] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:18:56.778 [2024-12-10 11:27:18.725757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73546 ] 00:18:57.036 [2024-12-10 11:27:18.953004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.036 [2024-12-10 11:27:19.072003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.294 Running I/O for 5 seconds... 
00:18:59.603 44575.00 IOPS, 174.12 MiB/s [2024-12-10T11:27:22.704Z] 45498.50 IOPS, 177.73 MiB/s [2024-12-10T11:27:23.639Z] 45278.00 IOPS, 176.87 MiB/s [2024-12-10T11:27:24.573Z] 45085.25 IOPS, 176.11 MiB/s [2024-12-10T11:27:24.573Z] 44588.00 IOPS, 174.17 MiB/s 00:19:02.406 Latency(us) 00:19:02.406 [2024-12-10T11:27:24.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.406 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:02.406 xnvme_bdev : 5.01 44541.86 173.99 0.00 0.00 1432.06 97.28 10128.29 00:19:02.406 [2024-12-10T11:27:24.573Z] =================================================================================================================== 00:19:02.406 [2024-12-10T11:27:24.573Z] Total : 44541.86 173.99 0.00 0.00 1432.06 97.28 10128.29 00:19:03.342 00:19:03.342 real 0m26.942s 00:19:03.342 user 0m20.252s 00:19:03.342 sys 0m5.277s 00:19:03.342 11:27:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.342 11:27:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:03.342 ************************************ 00:19:03.342 END TEST xnvme_bdevperf 00:19:03.342 ************************************ 00:19:03.342 11:27:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:03.342 11:27:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:03.342 11:27:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.342 11:27:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:03.342 ************************************ 00:19:03.342 START TEST xnvme_fio_plugin 00:19:03.342 ************************************ 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:03.342 11:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:03.601 { 00:19:03.601 "subsystems": [ 00:19:03.601 { 00:19:03.601 "subsystem": "bdev", 00:19:03.601 "config": [ 00:19:03.601 { 00:19:03.601 "params": { 00:19:03.601 "io_mechanism": "io_uring_cmd", 00:19:03.601 "conserve_cpu": true, 00:19:03.601 "filename": "/dev/ng0n1", 00:19:03.601 "name": "xnvme_bdev" 00:19:03.601 }, 00:19:03.601 "method": "bdev_xnvme_create" 00:19:03.601 }, 00:19:03.601 { 00:19:03.601 "method": "bdev_wait_for_examine" 00:19:03.601 } 00:19:03.601 ] 00:19:03.601 } 00:19:03.601 ] 00:19:03.601 } 00:19:03.601 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:03.601 fio-3.35 00:19:03.601 Starting 1 thread 00:19:10.162 00:19:10.162 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73659: Tue Dec 10 11:27:31 2024 00:19:10.162 read: IOPS=53.0k, BW=207MiB/s (217MB/s)(1035MiB/5001msec) 00:19:10.162 slat (nsec): min=3127, max=79474, avg=3931.81, stdev=1384.50 00:19:10.162 clat (usec): min=747, max=1888, avg=1051.86, stdev=139.74 00:19:10.162 lat (usec): min=750, max=1912, avg=1055.80, stdev=140.30 00:19:10.162 clat percentiles (usec): 00:19:10.162 | 1.00th=[ 832], 5.00th=[ 873], 10.00th=[ 898], 20.00th=[ 938], 00:19:10.162 | 30.00th=[ 971], 40.00th=[ 1004], 50.00th=[ 1037], 60.00th=[ 1057], 00:19:10.162 | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[ 1221], 95.00th=[ 1303], 00:19:10.162 | 99.00th=[ 1565], 99.50th=[ 1631], 99.90th=[ 1729], 99.95th=[ 1778], 00:19:10.162 | 99.99th=[ 1844] 00:19:10.162 bw ( KiB/s): min=190464, max=224768, per=99.00%, avg=209806.22, stdev=11022.21, samples=9 00:19:10.162 iops : min=47616, max=56192, avg=52451.56, stdev=2755.55, samples=9 00:19:10.162 lat (usec) : 750=0.01%, 1000=38.93% 00:19:10.162 lat (msec) : 2=61.07% 00:19:10.162 cpu : usr=78.12%, sys=18.94%, ctx=15, majf=0, minf=762 00:19:10.162 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:10.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.162 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:19:10.162 issued rwts: total=264960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.162 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:10.162 00:19:10.162 Run status group 0 (all jobs): 00:19:10.162 READ: bw=207MiB/s (217MB/s), 207MiB/s-207MiB/s (217MB/s-217MB/s), io=1035MiB (1085MB), run=5001-5001msec 00:19:10.729 ----------------------------------------------------- 00:19:10.729 Suppressions used: 00:19:10.729 count bytes template 00:19:10.729 1 11 /usr/src/fio/parse.c 00:19:10.729 1 8 libtcmalloc_minimal.so 00:19:10.729 1 904 libcrypto.so 00:19:10.729 ----------------------------------------------------- 00:19:10.729 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:10.729 11:27:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:10.729 { 00:19:10.729 "subsystems": [ 00:19:10.729 { 00:19:10.729 "subsystem": "bdev", 00:19:10.729 "config": [ 00:19:10.729 { 00:19:10.729 "params": { 00:19:10.729 "io_mechanism": "io_uring_cmd", 00:19:10.729 "conserve_cpu": true, 00:19:10.729 "filename": "/dev/ng0n1", 00:19:10.729 "name": "xnvme_bdev" 00:19:10.729 }, 00:19:10.729 "method": "bdev_xnvme_create" 00:19:10.729 }, 00:19:10.729 { 00:19:10.729 "method": "bdev_wait_for_examine" 00:19:10.729 } 00:19:10.729 ] 00:19:10.729 } 00:19:10.729 ] 00:19:10.729 } 00:19:10.988 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:10.988 fio-3.35 00:19:10.988 Starting 1 thread 00:19:17.551 00:19:17.551 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73750: Tue Dec 10 11:27:38 2024 00:19:17.551 write: IOPS=48.1k, BW=188MiB/s (197MB/s)(940MiB/5001msec); 0 zone resets 00:19:17.551 slat (nsec): min=2983, max=58742, avg=4374.31, stdev=1766.52 00:19:17.551 clat (usec): min=799, max=2449, avg=1157.73, stdev=170.11 00:19:17.551 lat (usec): min=803, max=2469, avg=1162.10, stdev=170.81 00:19:17.551 clat percentiles (usec): 00:19:17.551 | 1.00th=[ 906], 5.00th=[ 955], 10.00th=[ 988], 20.00th=[ 1029], 00:19:17.551 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1172], 00:19:17.551 | 70.00th=[ 1205], 80.00th=[ 1254], 90.00th=[ 1352], 95.00th=[ 1516], 00:19:17.551 | 99.00th=[ 1762], 99.50th=[ 1860], 99.90th=[ 2089], 99.95th=[ 2212], 00:19:17.551 | 99.99th=[ 2376] 00:19:17.551 bw ( KiB/s): min=182272, max=206336, per=100.00%, avg=193392.67, stdev=7483.55, samples=9 00:19:17.551 iops : min=45568, max=51584, avg=48348.11, stdev=1870.88, samples=9 00:19:17.551 lat (usec) : 1000=13.26% 00:19:17.551 lat (msec) : 2=86.54%, 4=0.20% 00:19:17.551 cpu : usr=78.96%, sys=18.14%, ctx=11, majf=0, minf=763 00:19:17.551 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:17.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.551 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:17.551 issued rwts: total=0,240567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.551 00:19:17.551 Run status group 0 (all jobs): 00:19:17.551 WRITE: bw=188MiB/s (197MB/s), 188MiB/s-188MiB/s (197MB/s-197MB/s), io=940MiB (985MB), run=5001-5001msec 00:19:18.118 ----------------------------------------------------- 00:19:18.118 Suppressions used: 00:19:18.118 count bytes template 00:19:18.118 1 11 /usr/src/fio/parse.c 00:19:18.118 1 8 libtcmalloc_minimal.so 00:19:18.118 1 904 libcrypto.so 00:19:18.118 ----------------------------------------------------- 00:19:18.118 00:19:18.118 00:19:18.118 real 0m14.726s 00:19:18.118 user 0m11.679s 00:19:18.118 sys 0m2.485s 00:19:18.118 11:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.118 11:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:18.118 ************************************ 00:19:18.118 END TEST xnvme_fio_plugin 00:19:18.118 ************************************ 00:19:18.118 11:27:40 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73231 00:19:18.118 11:27:40 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73231 ']' 00:19:18.118 11:27:40 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73231 00:19:18.118 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73231) - No such process 00:19:18.118 Process with pid 73231 is not found 00:19:18.118 11:27:40 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73231 is not found' 00:19:18.118 00:19:18.118 real 3m45.196s 00:19:18.118 user 2m18.414s 00:19:18.118 sys 1m11.264s 00:19:18.118 11:27:40 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.118 11:27:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.118 ************************************ 00:19:18.118 END TEST nvme_xnvme 00:19:18.118 ************************************ 00:19:18.118 11:27:40 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:18.118 11:27:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:18.118 11:27:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.118 11:27:40 -- common/autotest_common.sh@10 -- # set +x 00:19:18.118 ************************************ 00:19:18.118 START TEST blockdev_xnvme 00:19:18.118 ************************************ 00:19:18.118 11:27:40 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:18.377 * Looking for test storage... 00:19:18.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:18.377 11:27:40 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:18.377 11:27:40 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:19:18.377 11:27:40 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:18.377 11:27:40 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.377 11:27:40 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:19:18.377 11:27:40 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.377 11:27:40 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:18.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.377 --rc genhtml_branch_coverage=1 00:19:18.377 --rc genhtml_function_coverage=1 00:19:18.377 --rc genhtml_legend=1 00:19:18.377 --rc geninfo_all_blocks=1 00:19:18.377 --rc geninfo_unexecuted_blocks=1 00:19:18.377 00:19:18.377 ' 00:19:18.377 11:27:40 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:18.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.377 --rc genhtml_branch_coverage=1 00:19:18.377 --rc genhtml_function_coverage=1 00:19:18.377 --rc genhtml_legend=1 00:19:18.377 --rc geninfo_all_blocks=1 00:19:18.377 --rc geninfo_unexecuted_blocks=1 00:19:18.377 00:19:18.377 ' 00:19:18.377 11:27:40 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:18.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.377 --rc genhtml_branch_coverage=1 00:19:18.377 --rc genhtml_function_coverage=1 00:19:18.377 --rc genhtml_legend=1 00:19:18.377 --rc geninfo_all_blocks=1 00:19:18.377 --rc geninfo_unexecuted_blocks=1 00:19:18.377 00:19:18.377 ' 00:19:18.377 11:27:40 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:18.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.377 --rc genhtml_branch_coverage=1 00:19:18.378 --rc genhtml_function_coverage=1 00:19:18.378 --rc genhtml_legend=1 00:19:18.378 --rc geninfo_all_blocks=1 00:19:18.378 --rc geninfo_unexecuted_blocks=1 00:19:18.378 00:19:18.378 ' 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73892 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:18.378 11:27:40 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73892 00:19:18.378 11:27:40 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73892 ']' 00:19:18.378 11:27:40 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.378 11:27:40 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.378 11:27:40 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.378 11:27:40 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.378 11:27:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.636 [2024-12-10 11:27:40.566667] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
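The trace above launches spdk_tgt and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A condensed sketch of that idiom in Bash (the retry budget, sleep interval, and rpc.py path are illustrative, not the exact helper from common/autotest_common.sh):

    # Poll until an SPDK app accepts RPCs on its UNIX socket, or give up.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1          # app died early
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                >/dev/null 2>&1 && return 0                 # socket is live
            sleep 0.1
        done
        return 1                                            # timed out
    }

rpc_get_methods is a cheap, always-registered RPC, which makes it a convenient liveness probe.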
00:19:18.636 [2024-12-10 11:27:40.566828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73892 ] 00:19:18.636 [2024-12-10 11:27:40.756101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.895 [2024-12-10 11:27:40.861361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.830 11:27:41 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.830 11:27:41 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:19:19.830 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:19.830 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:19:19.830 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:19:19.830 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:19:19.830 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:20.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:20.654 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:19:20.654 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:19:20.654 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:19:20.654 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:19:20.654 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:19:20.654 11:27:42 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:20.654 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:19:20.655 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:19:20.655 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:19:20.655 11:27:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:19:20.655 11:27:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:19:20.655 11:27:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.655 nvme0n1 00:19:20.655 nvme0n2 00:19:20.655 nvme0n3 00:19:20.655 nvme1n1 00:19:20.655 nvme2n1 00:19:20.655 nvme3n1 00:19:20.655 11:27:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:20.655 11:27:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.655 11:27:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.655 11:27:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:19:20.655 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:20.655 11:27:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.655 11:27:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.914 11:27:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.914 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:20.914 11:27:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.914 11:27:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.914 11:27:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.914 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:20.914 11:27:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.914 11:27:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.914 
11:27:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.914 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:20.914 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:19:20.914 11:27:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.914 11:27:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.914 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:20.914 11:27:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.914 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:20.914 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:20.915 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "91e151ca-dbe6-476a-9f5f-2c810fa970b4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "91e151ca-dbe6-476a-9f5f-2c810fa970b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "c98d167f-9161-41cb-9f88-19297b14c3d8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c98d167f-9161-41cb-9f88-19297b14c3d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "2604089a-f640-42e9-9275-2d332c416ed5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2604089a-f640-42e9-9275-2d332c416ed5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "ec777d9b-2725-485e-811b-d30cfedb6da7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ec777d9b-2725-485e-811b-d30cfedb6da7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e8e87fc0-6127-4fae-8952-9a427b536db3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e8e87fc0-6127-4fae-8952-9a427b536db3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cbb47305-97db-4d4c-bbd6-0f9aecc4afef"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cbb47305-97db-4d4c-bbd6-0f9aecc4afef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:20.915 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:20.915 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:19:20.915 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:20.915 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73892 00:19:20.915 11:27:42 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73892 ']' 00:19:20.915 11:27:42 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73892 00:19:20.915 11:27:42 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:19:20.915 11:27:42 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.915 11:27:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 73892 00:19:20.915 11:27:43 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.915 11:27:43 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.915 11:27:43 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73892' 00:19:20.915 killing process with pid 73892 00:19:20.915 11:27:43 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73892 00:19:20.915 11:27:43 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73892 00:19:23.449 11:27:45 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:23.449 11:27:45 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:23.449 11:27:45 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:23.449 11:27:45 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.449 11:27:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:23.449 ************************************ 00:19:23.449 START TEST bdev_hello_world 00:19:23.449 ************************************ 00:19:23.449 11:27:45 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:23.449 [2024-12-10 11:27:45.193076] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:19:23.449 [2024-12-10 11:27:45.193281] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74182 ] 00:19:23.449 [2024-12-10 11:27:45.376334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.449 [2024-12-10 11:27:45.480427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.016 [2024-12-10 11:27:45.880980] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:24.016 [2024-12-10 11:27:45.881037] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:19:24.016 [2024-12-10 11:27:45.881077] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:24.016 [2024-12-10 11:27:45.883307] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:24.016 [2024-12-10 11:27:45.883624] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:24.016 [2024-12-10 11:27:45.883669] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:24.016 [2024-12-10 11:27:45.883927] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
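The killprocess call traced above double-checks what it is about to signal: kill -0 confirms the pid still exists, and ps -o comm= confirms it is still the reactor rather than something that recycled the pid. A simplified sketch (the real helper also special-cases sudo-wrapped targets, omitted here):

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true               # reap; ignore status
    }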
00:19:24.016 00:19:24.016 [2024-12-10 11:27:45.883965] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:24.952 00:19:24.952 real 0m1.775s 00:19:24.952 user 0m1.451s 00:19:24.952 sys 0m0.209s 00:19:24.952 11:27:46 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.952 11:27:46 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:24.952 ************************************ 00:19:24.952 END TEST bdev_hello_world 00:19:24.952 ************************************ 00:19:24.952 11:27:46 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:24.952 11:27:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:24.952 11:27:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.952 11:27:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:24.952 ************************************ 00:19:24.952 START TEST bdev_bounds 00:19:24.952 ************************************ 00:19:24.952 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:24.952 11:27:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74213 00:19:24.952 11:27:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:24.952 11:27:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:24.952 Process bdevio pid: 74213 00:19:24.952 11:27:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74213' 00:19:24.952 11:27:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74213 00:19:24.952 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74213 ']' 00:19:24.952 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.952 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.952 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.952 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.952 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:24.952 [2024-12-10 11:27:47.033145] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
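bdev_bounds, whose startup is traced above, runs the bdevio app in wait mode (-w) against the generated bdev.json and then triggers the suites over RPC. The shape of that sequence, using repo-relative paths and the helpers sketched earlier (illustrative, not verbatim from blockdev.sh):

    # Start bdevio idle, kick the CUnit suites via RPC, then tear it down.
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    waitforlisten "$bdevio_pid"
    test/bdev/bdevio/tests.py perform_tests
    killprocess "$bdevio_pid"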
00:19:24.952 [2024-12-10 11:27:47.033295] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74213 ] 00:19:25.211 [2024-12-10 11:27:47.213555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:25.211 [2024-12-10 11:27:47.344163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.211 [2024-12-10 11:27:47.344329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.211 [2024-12-10 11:27:47.344330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.146 11:27:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.146 11:27:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:26.146 11:27:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:26.147 I/O targets: 00:19:26.147 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:26.147 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:26.147 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:26.147 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:26.147 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:26.147 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:26.147 00:19:26.147 00:19:26.147 CUnit - A unit testing framework for C - Version 2.1-3 00:19:26.147 http://cunit.sourceforge.net/ 00:19:26.147 00:19:26.147 00:19:26.147 Suite: bdevio tests on: nvme3n1 00:19:26.147 Test: blockdev write read block ...passed 00:19:26.147 Test: blockdev write zeroes read block ...passed 00:19:26.147 Test: blockdev write zeroes read no split ...passed 00:19:26.147 Test: blockdev write zeroes read split ...passed 00:19:26.147 Test: blockdev write zeroes read split partial ...passed 00:19:26.147 Test: blockdev reset ...passed 00:19:26.147 Test: blockdev write read 8 blocks ...passed 00:19:26.147 Test: blockdev write read size > 128k ...passed 00:19:26.147 Test: blockdev write read invalid size ...passed 00:19:26.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.147 Test: blockdev write read max offset ...passed 00:19:26.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.147 Test: blockdev writev readv 8 blocks ...passed 00:19:26.147 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.147 Test: blockdev writev readv block ...passed 00:19:26.147 Test: blockdev writev readv size > 128k ...passed 00:19:26.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.147 Test: blockdev comparev and writev ...passed 00:19:26.147 Test: blockdev nvme passthru rw ...passed 00:19:26.147 Test: blockdev nvme passthru vendor specific ...passed 00:19:26.147 Test: blockdev nvme admin passthru ...passed 00:19:26.147 Test: blockdev copy ...passed 00:19:26.147 Suite: bdevio tests on: nvme2n1 00:19:26.147 Test: blockdev write read block ...passed 00:19:26.147 Test: blockdev write zeroes read block ...passed 00:19:26.147 Test: blockdev write zeroes read no split ...passed 00:19:26.147 Test: blockdev write zeroes read split ...passed 00:19:26.147 Test: blockdev write zeroes read split partial ...passed 00:19:26.147 Test: blockdev reset ...passed 
00:19:26.147 Test: blockdev write read 8 blocks ...passed 00:19:26.147 Test: blockdev write read size > 128k ...passed 00:19:26.147 Test: blockdev write read invalid size ...passed 00:19:26.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.147 Test: blockdev write read max offset ...passed 00:19:26.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.147 Test: blockdev writev readv 8 blocks ...passed 00:19:26.147 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.147 Test: blockdev writev readv block ...passed 00:19:26.147 Test: blockdev writev readv size > 128k ...passed 00:19:26.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.147 Test: blockdev comparev and writev ...passed 00:19:26.147 Test: blockdev nvme passthru rw ...passed 00:19:26.147 Test: blockdev nvme passthru vendor specific ...passed 00:19:26.147 Test: blockdev nvme admin passthru ...passed 00:19:26.147 Test: blockdev copy ...passed 00:19:26.147 Suite: bdevio tests on: nvme1n1 00:19:26.147 Test: blockdev write read block ...passed 00:19:26.147 Test: blockdev write zeroes read block ...passed 00:19:26.147 Test: blockdev write zeroes read no split ...passed 00:19:26.147 Test: blockdev write zeroes read split ...passed 00:19:26.147 Test: blockdev write zeroes read split partial ...passed 00:19:26.147 Test: blockdev reset ...passed 00:19:26.147 Test: blockdev write read 8 blocks ...passed 00:19:26.147 Test: blockdev write read size > 128k ...passed 00:19:26.147 Test: blockdev write read invalid size ...passed 00:19:26.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.147 Test: blockdev write read max offset ...passed 00:19:26.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.147 Test: blockdev writev readv 8 blocks ...passed 00:19:26.147 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.147 Test: blockdev writev readv block ...passed 00:19:26.147 Test: blockdev writev readv size > 128k ...passed 00:19:26.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.147 Test: blockdev comparev and writev ...passed 00:19:26.147 Test: blockdev nvme passthru rw ...passed 00:19:26.147 Test: blockdev nvme passthru vendor specific ...passed 00:19:26.147 Test: blockdev nvme admin passthru ...passed 00:19:26.147 Test: blockdev copy ...passed 00:19:26.147 Suite: bdevio tests on: nvme0n3 00:19:26.147 Test: blockdev write read block ...passed 00:19:26.147 Test: blockdev write zeroes read block ...passed 00:19:26.147 Test: blockdev write zeroes read no split ...passed 00:19:26.147 Test: blockdev write zeroes read split ...passed 00:19:26.406 Test: blockdev write zeroes read split partial ...passed 00:19:26.406 Test: blockdev reset ...passed 00:19:26.406 Test: blockdev write read 8 blocks ...passed 00:19:26.406 Test: blockdev write read size > 128k ...passed 00:19:26.406 Test: blockdev write read invalid size ...passed 00:19:26.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.407 Test: blockdev write read max offset ...passed 00:19:26.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.407 Test: blockdev writev readv 8 blocks 
...passed 00:19:26.407 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.407 Test: blockdev writev readv block ...passed 00:19:26.407 Test: blockdev writev readv size > 128k ...passed 00:19:26.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.407 Test: blockdev comparev and writev ...passed 00:19:26.407 Test: blockdev nvme passthru rw ...passed 00:19:26.407 Test: blockdev nvme passthru vendor specific ...passed 00:19:26.407 Test: blockdev nvme admin passthru ...passed 00:19:26.407 Test: blockdev copy ...passed 00:19:26.407 Suite: bdevio tests on: nvme0n2 00:19:26.407 Test: blockdev write read block ...passed 00:19:26.407 Test: blockdev write zeroes read block ...passed 00:19:26.407 Test: blockdev write zeroes read no split ...passed 00:19:26.407 Test: blockdev write zeroes read split ...passed 00:19:26.407 Test: blockdev write zeroes read split partial ...passed 00:19:26.407 Test: blockdev reset ...passed 00:19:26.407 Test: blockdev write read 8 blocks ...passed 00:19:26.407 Test: blockdev write read size > 128k ...passed 00:19:26.407 Test: blockdev write read invalid size ...passed 00:19:26.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.407 Test: blockdev write read max offset ...passed 00:19:26.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.407 Test: blockdev writev readv 8 blocks ...passed 00:19:26.407 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.407 Test: blockdev writev readv block ...passed 00:19:26.407 Test: blockdev writev readv size > 128k ...passed 00:19:26.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.407 Test: blockdev comparev and writev ...passed 00:19:26.407 Test: blockdev nvme passthru rw ...passed 00:19:26.407 Test: blockdev nvme passthru vendor specific ...passed 00:19:26.407 Test: blockdev nvme admin passthru ...passed 00:19:26.407 Test: blockdev copy ...passed 00:19:26.407 Suite: bdevio tests on: nvme0n1 00:19:26.407 Test: blockdev write read block ...passed 00:19:26.407 Test: blockdev write zeroes read block ...passed 00:19:26.407 Test: blockdev write zeroes read no split ...passed 00:19:26.407 Test: blockdev write zeroes read split ...passed 00:19:26.407 Test: blockdev write zeroes read split partial ...passed 00:19:26.407 Test: blockdev reset ...passed 00:19:26.407 Test: blockdev write read 8 blocks ...passed 00:19:26.407 Test: blockdev write read size > 128k ...passed 00:19:26.407 Test: blockdev write read invalid size ...passed 00:19:26.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.407 Test: blockdev write read max offset ...passed 00:19:26.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.407 Test: blockdev writev readv 8 blocks ...passed 00:19:26.407 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.407 Test: blockdev writev readv block ...passed 00:19:26.407 Test: blockdev writev readv size > 128k ...passed 00:19:26.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.407 Test: blockdev comparev and writev ...passed 00:19:26.407 Test: blockdev nvme passthru rw ...passed 00:19:26.407 Test: blockdev nvme passthru vendor specific ...passed 00:19:26.407 Test: blockdev nvme admin passthru ...passed 00:19:26.407 Test: blockdev copy ...passed 
00:19:26.407 00:19:26.407 Run Summary: Type Total Ran Passed Failed Inactive 00:19:26.407 suites 6 6 n/a 0 0 00:19:26.407 tests 138 138 138 0 0 00:19:26.407 asserts 780 780 780 0 n/a 00:19:26.407 00:19:26.407 Elapsed time = 1.220 seconds 00:19:26.407 0 00:19:26.407 11:27:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74213 00:19:26.407 11:27:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74213 ']' 00:19:26.407 11:27:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74213 00:19:26.407 11:27:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:26.407 11:27:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.407 11:27:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74213 00:19:26.407 11:27:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.407 11:27:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.407 killing process with pid 74213 00:19:26.407 11:27:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74213' 00:19:26.407 11:27:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74213 00:19:26.407 11:27:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74213 00:19:27.783 11:27:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:27.783 00:19:27.783 real 0m2.636s 00:19:27.783 user 0m6.608s 00:19:27.783 sys 0m0.349s 00:19:27.783 11:27:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.783 ************************************ 00:19:27.783 11:27:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:27.783 END TEST bdev_bounds 00:19:27.783 ************************************ 00:19:27.783 11:27:49 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:19:27.783 11:27:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:27.783 11:27:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.783 11:27:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:27.783 ************************************ 00:19:27.783 START TEST bdev_nbd 00:19:27.783 ************************************ 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
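For the NBD test the harness first brings up a minimal bdev_svc app on its own RPC socket before mapping any devices; schematically (the socket path and instance id follow the trace below, the rest is illustrative):

    # Bring up bdev_svc with the xNVMe bdev config on a dedicated socket.
    test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json test/bdev/bdev.json &
    nbd_pid=$!
    waitforlisten "$nbd_pid" /var/tmp/spdk-nbd.sock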
00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74278 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74278 /var/tmp/spdk-nbd.sock 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74278 ']' 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.783 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:27.783 [2024-12-10 11:27:49.712806] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
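Each nbd_start_disk below is followed by a waitfornbd probe: wait for the node to appear in /proc/partitions, then prove it serves I/O with a single direct 4 KiB read. A condensed sketch of that check (retry budget and scratch-file path are illustrative):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do                   # bounded retry
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One O_DIRECT read; a zero-size result means the device is not up.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }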
00:19:27.783 [2024-12-10 11:27:49.712993] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.783 [2024-12-10 11:27:49.891256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.041 [2024-12-10 11:27:49.994401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.606 11:27:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.606 11:27:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:28.606 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:28.606 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.606 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:28.606 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:28.606 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:28.606 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.606 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:28.606 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:28.607 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:28.607 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:28.607 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:28.607 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:28.607 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:19:28.864 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:28.864 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:28.864 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:28.864 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:28.864 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:28.864 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.864 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.864 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:29.180 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:29.180 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:29.180 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:29.180 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:29.180 
1+0 records in 00:19:29.180 1+0 records out 00:19:29.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134427 s, 3.0 MB/s 00:19:29.180 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.180 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:29.180 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.180 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:29.180 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:29.180 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:29.181 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:29.181 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:19:29.181 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:29.181 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:29.438 1+0 records in 00:19:29.438 1+0 records out 00:19:29.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538341 s, 7.6 MB/s 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:29.438 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:29.696 11:27:51 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:29.696 1+0 records in
00:19:29.696 1+0 records out
00:19:29.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583212 s, 7.0 MB/s
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:19:29.696 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1
00:19:29.954 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:19:29.954 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:19:29.954 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:19:29.954 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:19:29.954 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:29.954 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:29.954 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:29.954 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:19:29.954 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:29.954 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:29.954 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:29.954 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:29.955 1+0 records in
00:19:29.955 1+0 records out
00:19:29.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697954 s, 5.9 MB/s
00:19:29.955 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:29.955 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:29.955 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:29.955 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:29.955 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:29.955 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:19:29.955 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:19:29.955 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:30.213 1+0 records in
00:19:30.213 1+0 records out
00:19:30.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607805 s, 6.7 MB/s
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:19:30.213 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:30.472 1+0 records in
00:19:30.472 1+0 records out
00:19:30.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713961 s, 5.7 MB/s
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:19:30.472 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:19:30.731 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:19:30.731 {
00:19:30.731 "nbd_device": "/dev/nbd0",
00:19:30.731 "bdev_name": "nvme0n1"
00:19:30.731 },
00:19:30.731 {
00:19:30.731 "nbd_device": "/dev/nbd1",
00:19:30.731 "bdev_name": "nvme0n2"
00:19:30.731 },
00:19:30.731 {
00:19:30.731 "nbd_device": "/dev/nbd2",
00:19:30.731 "bdev_name": "nvme0n3"
00:19:30.731 },
00:19:30.731 {
00:19:30.731 "nbd_device": "/dev/nbd3",
00:19:30.732 "bdev_name": "nvme1n1"
00:19:30.732 },
00:19:30.732 {
00:19:30.732 "nbd_device": "/dev/nbd4",
00:19:30.732 "bdev_name": "nvme2n1"
00:19:30.732 },
00:19:30.732 {
00:19:30.732 "nbd_device": "/dev/nbd5",
00:19:30.732 "bdev_name": "nvme3n1"
00:19:30.732 }
00:19:30.732 ]'
00:19:30.732 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:19:30.732 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:19:30.732 {
00:19:30.732 "nbd_device": "/dev/nbd0",
00:19:30.732 "bdev_name": "nvme0n1"
00:19:30.732 },
00:19:30.732 {
00:19:30.732 "nbd_device": "/dev/nbd1",
00:19:30.732 "bdev_name": "nvme0n2"
00:19:30.732 },
00:19:30.732 {
00:19:30.732 "nbd_device": "/dev/nbd2",
00:19:30.732 "bdev_name": "nvme0n3"
00:19:30.732 },
00:19:30.732 {
00:19:30.732 "nbd_device": "/dev/nbd3",
00:19:30.732 "bdev_name": "nvme1n1"
00:19:30.732 },
00:19:30.732 {
00:19:30.732 "nbd_device": "/dev/nbd4",
00:19:30.732 "bdev_name": "nvme2n1"
00:19:30.732 },
00:19:30.732 {
00:19:30.732 "nbd_device": "/dev/nbd5",
00:19:30.732 "bdev_name": "nvme3n1"
00:19:30.732 }
00:19:30.732 ]'
00:19:30.732 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:19:30.990 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:19:30.990 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:30.990 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:19:30.990 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:30.990 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:19:30.990 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:30.990 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:19:31.248 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:31.248 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:31.248 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:31.248 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:31.248 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:31.248 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:31.248 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:31.248 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:31.248 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:31.248 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:19:31.506 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:31.506 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:31.506 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:31.507 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:31.507 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:31.507 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:31.507 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:31.507 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:31.507 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:31.507 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:19:31.765 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:19:31.765 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:19:31.765 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:19:31.765 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:31.765 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:31.765 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:19:31.765 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:31.765 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:31.765 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:31.765 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:19:32.023 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:19:32.023 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:19:32.023 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:19:32.023 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:32.023 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:32.023 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:19:32.023 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:32.023 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:32.023 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:32.023 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:19:32.281 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:19:32.281 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:19:32.281 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:19:32.282 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:32.282 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:32.282 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:19:32.282 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:32.282 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:32.282 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:32.282 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:19:32.540 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:19:32.540 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:19:32.540 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:19:32.540 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:32.540 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:32.540 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:19:32.540 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:32.540 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:32.540 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:19:32.540 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:32.540 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:19:33.108 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:19:33.108 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:19:33.108 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:19:33.108 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:19:33.108 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:19:33.108 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:19:33.108 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:19:33.108 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:19:33.108 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:19:33.108 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:19:33.108 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:19:33.108 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:19:33.108 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:19:33.108 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:33.108 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:33.109 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
00:19:33.369 /dev/nbd0
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:33.369 1+0 records in
00:19:33.369 1+0 records out
00:19:33.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486219 s, 8.4 MB/s
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:33.369 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1
00:19:33.626 /dev/nbd1
00:19:33.626 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:19:33.626 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:19:33.626 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:19:33.626 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:33.626 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:33.626 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:33.626 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:19:33.626 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:33.626 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:33.626 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:33.627 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:33.627 1+0 records in
00:19:33.627 1+0 records out
00:19:33.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565462 s, 7.2 MB/s
00:19:33.627 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:33.627 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:33.627 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:33.627 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:33.627 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:33.627 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:33.627 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:33.627 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10
00:19:33.885 /dev/nbd10
00:19:33.885 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:19:33.885 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:19:33.885 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:19:33.885 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:33.885 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:33.885 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:33.885 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:19:33.885 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:33.885 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:33.885 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:33.885 11:27:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:33.885 1+0 records in
00:19:33.885 1+0 records out
00:19:33.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051344 s, 8.0 MB/s
00:19:33.885 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:33.885 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:33.885 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:33.885 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:33.885 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:33.885 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:33.885 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:33.885 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11
00:19:34.144 /dev/nbd11
00:19:34.144 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:34.402 1+0 records in
00:19:34.402 1+0 records out
00:19:34.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064751 s, 6.3 MB/s
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:34.402 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12
00:19:34.660 /dev/nbd12
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:34.660 1+0 records in
00:19:34.660 1+0 records out
00:19:34.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000930786 s, 4.4 MB/s
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:34.660 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13
00:19:34.918 /dev/nbd13
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:34.918 1+0 records in
00:19:34.918 1+0 records out
00:19:34.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0027534 s, 1.5 MB/s
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:34.918 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:19:35.176 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:19:35.176 {
00:19:35.176 "nbd_device": "/dev/nbd0",
00:19:35.176 "bdev_name": "nvme0n1"
00:19:35.176 },
00:19:35.176 {
00:19:35.176 "nbd_device": "/dev/nbd1",
00:19:35.176 "bdev_name": "nvme0n2"
00:19:35.176 },
00:19:35.176 {
00:19:35.176 "nbd_device": "/dev/nbd10",
00:19:35.176 "bdev_name": "nvme0n3"
00:19:35.176 },
00:19:35.176 {
00:19:35.176 "nbd_device": "/dev/nbd11",
00:19:35.176 "bdev_name": "nvme1n1"
00:19:35.176 },
00:19:35.176 {
00:19:35.176 "nbd_device": "/dev/nbd12",
00:19:35.176 "bdev_name": "nvme2n1"
00:19:35.176 },
00:19:35.176 {
00:19:35.177 "nbd_device": "/dev/nbd13",
00:19:35.177 "bdev_name": "nvme3n1"
00:19:35.177 }
00:19:35.177 ]'
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:19:35.177 {
00:19:35.177 "nbd_device": "/dev/nbd0",
00:19:35.177 "bdev_name": "nvme0n1"
00:19:35.177 },
00:19:35.177 {
00:19:35.177 "nbd_device": "/dev/nbd1",
00:19:35.177 "bdev_name": "nvme0n2"
00:19:35.177 },
00:19:35.177 {
00:19:35.177 "nbd_device": "/dev/nbd10",
00:19:35.177 "bdev_name": "nvme0n3"
00:19:35.177 },
00:19:35.177 {
00:19:35.177 "nbd_device": "/dev/nbd11",
00:19:35.177 "bdev_name": "nvme1n1"
00:19:35.177 },
00:19:35.177 {
00:19:35.177 "nbd_device": "/dev/nbd12",
00:19:35.177 "bdev_name": "nvme2n1"
00:19:35.177 },
00:19:35.177 {
00:19:35.177 "nbd_device": "/dev/nbd13",
00:19:35.177 "bdev_name": "nvme3n1"
00:19:35.177 }
00:19:35.177 ]'
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:19:35.177 /dev/nbd1
00:19:35.177 /dev/nbd10
00:19:35.177 /dev/nbd11
00:19:35.177 /dev/nbd12
00:19:35.177 /dev/nbd13'
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:19:35.177 /dev/nbd1
00:19:35.177 /dev/nbd10
00:19:35.177 /dev/nbd11
00:19:35.177 /dev/nbd12
00:19:35.177 /dev/nbd13'
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:19:35.177 256+0 records in
00:19:35.177 256+0 records out
00:19:35.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103472 s, 101 MB/s
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:19:35.177 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:19:35.436 256+0 records in
00:19:35.436 256+0 records out
00:19:35.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154799 s, 6.8 MB/s
00:19:35.436 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:19:35.436 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:19:35.436 256+0 records in
00:19:35.436 256+0 records out
00:19:35.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154136 s, 6.8 MB/s
00:19:35.436 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:19:35.436 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:19:35.694 256+0 records in
00:19:35.694 256+0 records out
00:19:35.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153188 s, 6.8 MB/s
00:19:35.694 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:19:35.694 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:19:35.953 256+0 records in
00:19:35.953 256+0 records out
00:19:35.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156155 s, 6.7 MB/s
00:19:35.953 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:19:35.953 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:19:35.953 256+0 records in
00:19:35.953 256+0 records out
00:19:35.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166761 s, 6.3 MB/s
00:19:35.953 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:19:35.953 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:19:36.212 256+0 records in
00:19:36.212 256+0 records out
00:19:36.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135652 s, 7.7 MB/s
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:36.212 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:19:36.471 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:36.471 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:36.471 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:36.471 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:36.471 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:36.471 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:36.471 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:36.471 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:36.471 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:36.471 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:19:36.729 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:36.729 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:36.729 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:36.729 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:36.729 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:36.729 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:36.729 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:36.729 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:36.729 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:36.729 11:27:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:19:36.988 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:19:36.988 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:19:36.988 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:19:36.988 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:36.988 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:36.988 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:19:36.988 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:36.988 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:36.988 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:36.988 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:19:37.246 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:19:37.246 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:19:37.505 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:19:37.505 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:37.505 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:37.505 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:19:37.505 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:37.505 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:37.505 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:37.505 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:19:37.764 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:19:37.764 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:19:37.764 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:19:37.764 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:37.764 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:37.764 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:19:37.764 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:37.764 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:37.764 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:37.764 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:19:38.022 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:19:38.022 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:19:38.022 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:19:38.022 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:38.022 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:38.022 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:19:38.022 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:38.022 11:27:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:38.022 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:19:38.022 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:38.022 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:19:38.280 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:19:38.538 malloc_lvol_verify
00:19:38.538 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:19:38.797 5458dc63-4baf-43af-a4a3-a3cf3f9e421a
00:19:38.797 11:28:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:19:39.056 c17803ce-046b-4224-8ade-a8811aa77cb6
00:19:39.056 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:19:39.314 /dev/nbd0
00:19:39.572 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:19:39.572 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:19:39.572 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:19:39.572 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:19:39.572 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:19:39.572 mke2fs 1.47.0 (5-Feb-2023)
00:19:39.572 Discarding device blocks: 0/4096 done
00:19:39.572 Creating filesystem with 4096 1k blocks and 1024 inodes
00:19:39.572
00:19:39.572 Allocating group tables: 0/1 done
00:19:39.572 Writing inode tables: 0/1 done
00:19:39.572 Creating journal (1024 blocks): done
00:19:39.572 Writing superblocks and filesystem accounting information: 0/1 done
00:19:39.572
00:19:39.572 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:19:39.572 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:39.572 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:19:39.572 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:39.572 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:19:39.572 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:39.572 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74278
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74278 ']'
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74278
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74278
00:19:39.830 killing process with pid 74278 11:28:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74278'
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74278
00:19:39.830 11:28:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74278
00:19:40.764 ************************************
00:19:40.764 END TEST bdev_nbd
00:19:40.764 ************************************
00:19:40.764 11:28:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:19:40.764
00:19:40.764 real 0m13.269s
00:19:40.764 user 0m19.117s
00:19:40.764 sys 0m4.195s
00:19:40.765 11:28:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:40.765 11:28:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:19:40.765 11:28:02 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:19:40.765 11:28:02 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']'
00:19:40.765 11:28:02 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']'
00:19:40.765 11:28:02 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite ''
00:19:40.765 11:28:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:40.765 11:28:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:40.765 11:28:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:40.765 ************************************
00:19:40.765 START TEST bdev_fio
00:19:40.765 ************************************
00:19:40.765 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:19:40.765 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:19:40.765 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:19:40.765 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:19:40.765 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:19:40.765 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:19:40.765 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]'
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]'
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]'
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]'
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]'
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]'
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1
00:19:41.023 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:19:41.024 11:28:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:19:41.024 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:19:41.024 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:41.024 11:28:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:19:41.024 ************************************
00:19:41.024 START TEST bdev_fio_rw_verify
00:19:41.024 ************************************
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:41.024 11:28:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:19:41.282 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:41.282 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:41.282 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:41.282 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:41.282 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:41.282 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:41.282 fio-3.35
00:19:41.282 Starting 6 threads
00:19:53.481
00:19:53.481 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74709: Tue Dec 10 11:28:14 2024
00:19:53.481 read: IOPS=29.0k, BW=113MiB/s (119MB/s)(1133MiB/10003msec)
00:19:53.482 slat (usec): min=3, max=711, avg= 7.08, stdev= 4.01
00:19:53.482 clat (usec): min=95, max=8118, avg=649.81, stdev=247.73
00:19:53.482 lat (usec): min=101, max=8126, avg=656.89, stdev=248.24
00:19:53.482 clat percentiles (usec):
00:19:53.482 | 50.000th=[ 668], 99.000th=[ 1205], 99.900th=[ 2769], 99.990th=[ 7439],
00:19:53.482 | 99.999th=[ 7963]
00:19:53.482 write: IOPS=29.3k, BW=114MiB/s (120MB/s)(1143MiB/10003msec); 0 zone resets
00:19:53.482 slat (usec): min=14, max=3077, avg=27.13, stdev=24.65
00:19:53.482 clat (usec): min=89, max=8277, avg=711.01, stdev=239.53
00:19:53.482 lat (usec): min=113, max=8299, avg=738.13, stdev=241.35
00:19:53.482 clat percentiles (usec):
00:19:53.482 | 50.000th=[ 725], 99.000th=[ 1303], 99.900th=[ 1844], 99.990th=[ 4490],
00:19:53.482 | 99.999th=[ 8160]
00:19:53.482 bw ( KiB/s): min=98294, max=142352, per=99.66%, avg=116653.89, stdev=2531.35, samples=114
00:19:53.482 iops : min=24572, max=35588, avg=29163.00, stdev=632.84, samples=114
00:19:53.483 lat (usec) : 100=0.01%, 250=2.47%, 500=19.58%, 750=38.77%, 1000=33.81%
00:19:53.483 lat (msec) : 2=5.27%, 4=0.08%, 10=0.02%
00:19:53.483 cpu : usr=61.27%, sys=25.74%, ctx=7770, majf=0, minf=24643
00:19:53.483 IO depths : 1=12.2%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:53.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:53.483 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:53.483 issued rwts: total=290082,292708,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:53.483 latency : target=0, window=0, percentile=100.00%, depth=8
00:19:53.483
00:19:53.483 Run status group 0 (all jobs):
00:19:53.483 READ: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=1133MiB (1188MB), run=10003-10003msec
00:19:53.483 WRITE: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=1143MiB (1199MB), run=10003-10003msec
00:19:53.483 -----------------------------------------------------
00:19:53.483 Suppressions used:
00:19:53.483 count bytes template
00:19:53.483 6 48 /usr/src/fio/parse.c
00:19:53.483 2435 233760 /usr/src/fio/iolog.c
00:19:53.483 1 8 libtcmalloc_minimal.so
00:19:53.483 1 904 libcrypto.so
00:19:53.483 -----------------------------------------------------
00:19:53.483
00:19:53.483
00:19:53.483 real 0m12.428s
00:19:53.483 user 0m38.726s
00:19:53.483 sys 0m15.777s
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:19:53.483 ************************************
00:19:53.483 END TEST bdev_fio_rw_verify
00:19:53.483 ************************************
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']'
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']'
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']'
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "91e151ca-dbe6-476a-9f5f-2c810fa970b4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "91e151ca-dbe6-476a-9f5f-2c810fa970b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "c98d167f-9161-41cb-9f88-19297b14c3d8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c98d167f-9161-41cb-9f88-19297b14c3d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "2604089a-f640-42e9-9275-2d332c416ed5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2604089a-f640-42e9-9275-2d332c416ed5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' '
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "ec777d9b-2725-485e-811b-d30cfedb6da7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ec777d9b-2725-485e-811b-d30cfedb6da7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e8e87fc0-6127-4fae-8952-9a427b536db3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e8e87fc0-6127-4fae-8952-9a427b536db3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cbb47305-97db-4d4c-bbd6-0f9aecc4afef"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cbb47305-97db-4d4c-bbd6-0f9aecc4afef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:53.483 /home/vagrant/spdk_repo/spdk 00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:19:53.483 00:19:53.483 real 0m12.605s 00:19:53.483 user 
0m38.820s 00:19:53.483 sys 0m15.861s 00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.483 11:28:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:53.483 ************************************ 00:19:53.483 END TEST bdev_fio 00:19:53.483 ************************************ 00:19:53.483 11:28:15 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:53.483 11:28:15 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:53.483 11:28:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:53.483 11:28:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.483 11:28:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:53.483 ************************************ 00:19:53.483 START TEST bdev_verify 00:19:53.483 ************************************ 00:19:53.483 11:28:15 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:53.742 [2024-12-10 11:28:15.683691] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:19:53.742 [2024-12-10 11:28:15.683870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74882 ] 00:19:53.742 [2024-12-10 11:28:15.864864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:54.006 [2024-12-10 11:28:15.968346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.006 [2024-12-10 11:28:15.968370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.282 Running I/O for 5 seconds... 
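
The verify pass launched above drives every attached xnvme bdev through bdevperf: a 128-deep queue (-q 128) of 4096-byte I/Os (-o 4096) in verify mode for five seconds, with reactors pinned to cores 0 and 1 (-m 0x3), matching the two "Reactor started" lines. A minimal sketch of running the same invocation by hand, assuming the repo paths used throughout this log and the bdev.json the harness generated earlier:

  # sketch only; flags mirror the logged command line above
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3
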
00:19:56.592 22560.00 IOPS, 88.12 MiB/s [2024-12-10T11:28:19.691Z] 22864.00 IOPS, 89.31 MiB/s [2024-12-10T11:28:21.066Z] 23488.00 IOPS, 91.75 MiB/s [2024-12-10T11:28:21.634Z] 23072.00 IOPS, 90.12 MiB/s [2024-12-10T11:28:21.634Z] 22425.60 IOPS, 87.60 MiB/s 00:19:59.467 Latency(us) 00:19:59.467 [2024-12-10T11:28:21.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.467 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:59.467 Verification LBA range: start 0x0 length 0x80000 00:19:59.467 nvme0n1 : 5.04 1649.57 6.44 0.00 0.00 77447.91 11736.90 74353.57 00:19:59.467 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.467 Verification LBA range: start 0x80000 length 0x80000 00:19:59.467 nvme0n1 : 5.06 1670.98 6.53 0.00 0.00 76452.17 16086.11 72447.07 00:19:59.467 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:59.467 Verification LBA range: start 0x0 length 0x80000 00:19:59.467 nvme0n2 : 5.04 1652.11 6.45 0.00 0.00 77191.62 15192.44 67204.19 00:19:59.467 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.467 Verification LBA range: start 0x80000 length 0x80000 00:19:59.467 nvme0n2 : 5.06 1670.42 6.53 0.00 0.00 76343.74 12809.31 67204.19 00:19:59.467 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:59.467 Verification LBA range: start 0x0 length 0x80000 00:19:59.467 nvme0n3 : 5.05 1648.98 6.44 0.00 0.00 77188.65 12511.42 67204.19 00:19:59.467 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.467 Verification LBA range: start 0x80000 length 0x80000 00:19:59.467 nvme0n3 : 5.04 1676.36 6.55 0.00 0.00 75926.54 7983.48 72923.69 00:19:59.467 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:59.467 Verification LBA range: start 0x0 length 0x20000 00:19:59.467 nvme1n1 : 5.05 1648.47 6.44 0.00 0.00 77067.97 9115.46 75306.82 00:19:59.467 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.467 Verification LBA range: start 0x20000 length 0x20000 00:19:59.467 nvme1n1 : 5.06 1669.63 6.52 0.00 0.00 76084.65 13702.98 71493.82 00:19:59.467 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:59.467 Verification LBA range: start 0x0 length 0xbd0bd 00:19:59.467 nvme2n1 : 5.07 2813.84 10.99 0.00 0.00 45022.47 4408.79 73876.95 00:19:59.468 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.468 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:19:59.468 nvme2n1 : 5.06 2716.70 10.61 0.00 0.00 46619.30 4944.99 73876.95 00:19:59.468 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:59.468 Verification LBA range: start 0x0 length 0xa0000 00:19:59.468 nvme3n1 : 5.07 1667.36 6.51 0.00 0.00 75677.32 5957.82 77689.95 00:19:59.468 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.468 Verification LBA range: start 0xa0000 length 0xa0000 00:19:59.468 nvme3n1 : 5.07 1692.04 6.61 0.00 0.00 74651.37 3813.00 72923.69 00:19:59.468 [2024-12-10T11:28:21.635Z] =================================================================================================================== 00:19:59.468 [2024-12-10T11:28:21.635Z] Total : 22176.45 86.63 0.00 0.00 68748.22 3813.00 77689.95 00:20:00.404 00:20:00.404 real 0m6.990s 00:20:00.404 user 0m11.113s 00:20:00.404 sys 0m1.700s 00:20:00.404 11:28:22 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.404 11:28:22 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:00.404 ************************************ 00:20:00.404 END TEST bdev_verify 00:20:00.404 ************************************ 00:20:00.662 11:28:22 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:00.662 11:28:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:00.662 11:28:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.662 11:28:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:00.662 ************************************ 00:20:00.662 START TEST bdev_verify_big_io 00:20:00.662 ************************************ 00:20:00.662 11:28:22 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:00.662 [2024-12-10 11:28:22.728885] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:20:00.662 [2024-12-10 11:28:22.729059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74976 ] 00:20:00.921 [2024-12-10 11:28:22.913583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:00.921 [2024-12-10 11:28:23.027572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.921 [2024-12-10 11:28:23.027583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.486 Running I/O for 5 seconds... 
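
bdev_verify_big_io repeats the verify workload with the I/O size raised from 4096 to 65536 bytes; every other bdevperf flag is unchanged from the previous pass. An illustrative sketch of the delta, assuming the same generated bdev.json:

  # only -o differs from the plain bdev_verify pass
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3
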
00:20:07.330 1224.00 IOPS, 76.50 MiB/s [2024-12-10T11:28:29.755Z] 2459.00 IOPS, 153.69 MiB/s 00:20:07.588 Latency(us) 00:20:07.588 [2024-12-10T11:28:29.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.588 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:07.588 Verification LBA range: start 0x0 length 0x8000 00:20:07.588 nvme0n1 : 5.98 144.60 9.04 0.00 0.00 846933.45 6017.40 1182031.13 00:20:07.588 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:07.588 Verification LBA range: start 0x8000 length 0x8000 00:20:07.588 nvme0n1 : 5.96 96.57 6.04 0.00 0.00 1236646.94 189696.93 2287802.18 00:20:07.588 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:07.589 Verification LBA range: start 0x0 length 0x8000 00:20:07.589 nvme0n2 : 5.90 105.71 6.61 0.00 0.00 1120488.39 69110.69 1982761.89 00:20:07.589 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:07.589 Verification LBA range: start 0x8000 length 0x8000 00:20:07.589 nvme0n2 : 5.90 108.44 6.78 0.00 0.00 1050063.13 17992.61 934185.89 00:20:07.589 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:07.589 Verification LBA range: start 0x0 length 0x8000 00:20:07.589 nvme0n3 : 5.98 125.81 7.86 0.00 0.00 901756.35 88652.33 968502.92 00:20:07.589 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:07.589 Verification LBA range: start 0x8000 length 0x8000 00:20:07.589 nvme0n3 : 5.98 139.10 8.69 0.00 0.00 823740.26 77689.95 785478.75 00:20:07.589 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:07.589 Verification LBA range: start 0x0 length 0x2000 00:20:07.589 nvme1n1 : 5.98 134.77 8.42 0.00 0.00 835516.14 66727.56 2150534.05 00:20:07.589 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:07.589 Verification LBA range: start 0x2000 length 0x2000 00:20:07.589 nvme1n1 : 5.97 131.39 8.21 0.00 0.00 849331.97 71017.19 1830241.75 00:20:07.589 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:07.589 Verification LBA range: start 0x0 length 0xbd0b 00:20:07.589 nvme2n1 : 6.00 114.64 7.16 0.00 0.00 952675.22 12571.00 1776859.69 00:20:07.589 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:07.589 Verification LBA range: start 0xbd0b length 0xbd0b 00:20:07.589 nvme2n1 : 5.97 128.57 8.04 0.00 0.00 860340.13 58148.31 1517575.45 00:20:07.589 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:07.589 Verification LBA range: start 0x0 length 0xa000 00:20:07.589 nvme3n1 : 6.00 117.42 7.34 0.00 0.00 895166.92 3559.80 1822615.74 00:20:07.589 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:07.589 Verification LBA range: start 0xa000 length 0xa000 00:20:07.589 nvme3n1 : 5.98 149.72 9.36 0.00 0.00 713707.49 3381.06 999006.95 00:20:07.589 [2024-12-10T11:28:29.756Z] =================================================================================================================== 00:20:07.589 [2024-12-10T11:28:29.756Z] Total : 1496.74 93.55 0.00 0.00 907087.58 3381.06 2287802.18 00:20:08.964 00:20:08.964 real 0m8.290s 00:20:08.964 user 0m15.160s 00:20:08.964 sys 0m0.469s 00:20:08.964 11:28:30 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.964 11:28:30 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
00:20:08.964 ************************************ 00:20:08.964 END TEST bdev_verify_big_io 00:20:08.964 ************************************ 00:20:08.964 11:28:30 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:08.964 11:28:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:08.964 11:28:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.964 11:28:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:08.964 ************************************ 00:20:08.964 START TEST bdev_write_zeroes 00:20:08.964 ************************************ 00:20:08.964 11:28:30 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:08.964 [2024-12-10 11:28:31.073942] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:20:08.964 [2024-12-10 11:28:31.074132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75092 ] 00:20:09.222 [2024-12-10 11:28:31.258398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.222 [2024-12-10 11:28:31.366907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.789 Running I/O for 1 seconds... 00:20:10.725 71104.00 IOPS, 277.75 MiB/s 00:20:10.725 Latency(us) 00:20:10.725 [2024-12-10T11:28:32.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.725 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:10.725 nvme0n1 : 1.03 10945.68 42.76 0.00 0.00 11679.80 5928.03 30384.87 00:20:10.725 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:10.725 nvme0n2 : 1.03 10928.96 42.69 0.00 0.00 11689.51 6345.08 30504.03 00:20:10.725 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:10.725 nvme0n3 : 1.03 10911.58 42.62 0.00 0.00 11698.60 6345.08 31218.97 00:20:10.725 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:10.725 nvme1n1 : 1.03 10895.77 42.56 0.00 0.00 11706.23 6345.08 31457.28 00:20:10.725 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:10.725 nvme2n1 : 1.02 15175.75 59.28 0.00 0.00 8394.94 3559.80 18230.92 00:20:10.725 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:10.725 nvme3n1 : 1.04 10879.32 42.50 0.00 0.00 11664.97 4230.05 27882.59 00:20:10.725 [2024-12-10T11:28:32.892Z] =================================================================================================================== 00:20:10.725 [2024-12-10T11:28:32.892Z] Total : 69737.05 272.41 0.00 0.00 10975.29 3559.80 31457.28 00:20:12.143 00:20:12.143 real 0m2.934s 00:20:12.143 user 0m2.208s 00:20:12.143 sys 0m0.529s 00:20:12.143 11:28:33 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.143 11:28:33 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:12.143 ************************************ 00:20:12.143 END TEST 
bdev_write_zeroes 00:20:12.143 ************************************ 00:20:12.143 11:28:33 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:12.143 11:28:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:12.143 11:28:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.143 11:28:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:12.143 ************************************ 00:20:12.143 START TEST bdev_json_nonenclosed 00:20:12.143 ************************************ 00:20:12.143 11:28:33 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:12.143 [2024-12-10 11:28:34.052469] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:20:12.143 [2024-12-10 11:28:34.052656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75146 ] 00:20:12.143 [2024-12-10 11:28:34.240200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.403 [2024-12-10 11:28:34.365409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.403 [2024-12-10 11:28:34.365565] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:12.403 [2024-12-10 11:28:34.365598] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:12.403 [2024-12-10 11:28:34.365615] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:12.661 00:20:12.661 real 0m0.682s 00:20:12.661 user 0m0.457s 00:20:12.661 sys 0m0.120s 00:20:12.661 11:28:34 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.661 11:28:34 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:12.661 ************************************ 00:20:12.661 END TEST bdev_json_nonenclosed 00:20:12.661 ************************************ 00:20:12.661 11:28:34 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:12.661 11:28:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:12.661 11:28:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.661 11:28:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:12.661 ************************************ 00:20:12.662 START TEST bdev_json_nonarray 00:20:12.662 ************************************ 00:20:12.662 11:28:34 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:12.662 [2024-12-10 11:28:34.784483] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:20:12.662 [2024-12-10 11:28:34.784678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75171 ] 00:20:12.920 [2024-12-10 11:28:34.967273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.920 [2024-12-10 11:28:35.073217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.920 [2024-12-10 11:28:35.073350] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:20:12.920 [2024-12-10 11:28:35.073378] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:12.920 [2024-12-10 11:28:35.073393] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:13.177 00:20:13.177 real 0m0.650s 00:20:13.177 user 0m0.438s 00:20:13.177 sys 0m0.107s 00:20:13.177 11:28:35 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.177 11:28:35 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:13.177 ************************************ 00:20:13.177 END TEST bdev_json_nonarray 00:20:13.177 ************************************ 00:20:13.435 11:28:35 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:20:13.435 11:28:35 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:20:13.435 11:28:35 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:20:13.435 11:28:35 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:13.435 11:28:35 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:20:13.435 11:28:35 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:13.435 11:28:35 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:13.435 11:28:35 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:20:13.435 11:28:35 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:20:13.435 11:28:35 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:20:13.435 11:28:35 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:20:13.435 11:28:35 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:14.002 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:15.907 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:16.474 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:16.474 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:16.474 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:16.732 00:20:16.732 real 0m58.415s 00:20:16.732 user 1m41.560s 00:20:16.732 sys 0m29.743s 00:20:16.732 11:28:38 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.732 11:28:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:16.732 ************************************ 00:20:16.732 END TEST blockdev_xnvme 00:20:16.732 ************************************ 00:20:16.732 11:28:38 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:16.732 11:28:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:16.732 11:28:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.732 11:28:38 -- 
common/autotest_common.sh@10 -- # set +x 00:20:16.732 ************************************ 00:20:16.732 START TEST ublk 00:20:16.732 ************************************ 00:20:16.732 11:28:38 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:16.732 * Looking for test storage... 00:20:16.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:16.732 11:28:38 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:16.732 11:28:38 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:20:16.732 11:28:38 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:16.990 11:28:38 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:16.990 11:28:38 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.991 11:28:38 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.991 11:28:38 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.991 11:28:38 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.991 11:28:38 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.991 11:28:38 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.991 11:28:38 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.991 11:28:38 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:20:16.991 11:28:38 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.991 11:28:38 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:20:16.991 11:28:38 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.991 11:28:38 ublk -- scripts/common.sh@344 -- # case "$op" in 00:20:16.991 11:28:38 ublk -- scripts/common.sh@345 -- # : 1 00:20:16.991 11:28:38 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.991 11:28:38 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:16.991 11:28:38 ublk -- scripts/common.sh@365 -- # decimal 1 00:20:16.991 11:28:38 ublk -- scripts/common.sh@353 -- # local d=1 00:20:16.991 11:28:38 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.991 11:28:38 ublk -- scripts/common.sh@355 -- # echo 1 00:20:16.991 11:28:38 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.991 11:28:38 ublk -- scripts/common.sh@366 -- # decimal 2 00:20:16.991 11:28:38 ublk -- scripts/common.sh@353 -- # local d=2 00:20:16.991 11:28:38 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.991 11:28:38 ublk -- scripts/common.sh@355 -- # echo 2 00:20:16.991 11:28:38 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.991 11:28:38 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.991 11:28:38 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.991 11:28:38 ublk -- scripts/common.sh@368 -- # return 0 00:20:16.991 11:28:38 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.991 11:28:38 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:16.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.991 --rc genhtml_branch_coverage=1 00:20:16.991 --rc genhtml_function_coverage=1 00:20:16.991 --rc genhtml_legend=1 00:20:16.991 --rc geninfo_all_blocks=1 00:20:16.991 --rc geninfo_unexecuted_blocks=1 00:20:16.991 00:20:16.991 ' 00:20:16.991 11:28:38 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:16.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.991 --rc genhtml_branch_coverage=1 00:20:16.991 --rc genhtml_function_coverage=1 00:20:16.991 --rc genhtml_legend=1 00:20:16.991 --rc geninfo_all_blocks=1 00:20:16.991 --rc geninfo_unexecuted_blocks=1 00:20:16.991 00:20:16.991 ' 00:20:16.991 11:28:38 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:16.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.991 --rc genhtml_branch_coverage=1 00:20:16.991 --rc genhtml_function_coverage=1 00:20:16.991 --rc genhtml_legend=1 00:20:16.991 --rc geninfo_all_blocks=1 00:20:16.991 --rc geninfo_unexecuted_blocks=1 00:20:16.991 00:20:16.991 ' 00:20:16.991 11:28:38 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:16.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.991 --rc genhtml_branch_coverage=1 00:20:16.991 --rc genhtml_function_coverage=1 00:20:16.991 --rc genhtml_legend=1 00:20:16.991 --rc geninfo_all_blocks=1 00:20:16.991 --rc geninfo_unexecuted_blocks=1 00:20:16.991 00:20:16.991 ' 00:20:16.991 11:28:38 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:16.991 11:28:38 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:16.991 11:28:38 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:16.991 11:28:38 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:16.991 11:28:38 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:16.991 11:28:38 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:16.991 11:28:38 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:16.991 11:28:38 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:16.991 11:28:38 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:20:16.991 11:28:38 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:20:16.991 11:28:38 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:20:16.991 11:28:38 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:20:16.991 11:28:38 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:20:16.991 11:28:38 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:20:16.991 11:28:38 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:20:16.991 11:28:38 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:20:16.991 11:28:38 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:20:16.991 11:28:38 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:20:16.991 11:28:38 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:20:16.991 11:28:39 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:20:16.991 11:28:39 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:16.991 11:28:39 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.991 11:28:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:16.991 ************************************ 00:20:16.991 START TEST test_save_ublk_config 00:20:16.991 ************************************ 00:20:16.991 11:28:39 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:20:16.991 11:28:39 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:20:16.991 11:28:39 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75462 00:20:16.991 11:28:39 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:20:16.991 11:28:39 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:20:16.991 11:28:39 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75462 00:20:16.991 11:28:39 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75462 ']' 00:20:16.991 11:28:39 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.991 11:28:39 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.991 11:28:39 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.991 11:28:39 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.991 11:28:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:16.991 [2024-12-10 11:28:39.136775] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
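
test_save_ublk_config, starting here, exercises a save/restore round trip: boot spdk_tgt, create the ublk target and a malloc-backed ublk disk, dump the live configuration with save_config, then (as the later spdk_tgt -c /dev/fd/63 run shows) boot a second target from that JSON. A minimal sketch of the same flow via rpc.py, with sizes and ids mirroring the logged config (32 MiB malloc0, ublk id 0, cpumask 1, 1 queue, depth 128); the exact option spellings are an assumption, not copied from the log:

  ./build/bin/spdk_tgt -L ublk & tgt=$!
  ./scripts/rpc.py ublk_create_target -c 1
  ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
  ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
  ./scripts/rpc.py save_config > /tmp/ublk_config.json
  kill $tgt; wait $tgt
  ./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json
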
00:20:16.991 [2024-12-10 11:28:39.137106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75462 ] 00:20:17.249 [2024-12-10 11:28:39.312656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.508 [2024-12-10 11:28:39.416705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.076 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.076 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:18.076 11:28:40 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:20:18.076 11:28:40 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:20:18.076 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.076 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:18.076 [2024-12-10 11:28:40.194704] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:18.076 [2024-12-10 11:28:40.195902] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:18.334 malloc0 00:20:18.334 [2024-12-10 11:28:40.270833] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:18.334 [2024-12-10 11:28:40.270968] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:18.334 [2024-12-10 11:28:40.270987] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:18.334 [2024-12-10 11:28:40.270998] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:18.334 [2024-12-10 11:28:40.278367] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:18.334 [2024-12-10 11:28:40.278405] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:18.334 [2024-12-10 11:28:40.285695] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:18.334 [2024-12-10 11:28:40.285814] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:18.334 [2024-12-10 11:28:40.313678] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:18.334 0 00:20:18.334 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.334 11:28:40 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:20:18.334 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.334 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:18.593 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.593 11:28:40 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:20:18.593 "subsystems": [ 00:20:18.593 { 00:20:18.593 "subsystem": "fsdev", 00:20:18.593 "config": [ 00:20:18.593 { 00:20:18.593 "method": "fsdev_set_opts", 00:20:18.593 "params": { 00:20:18.593 "fsdev_io_pool_size": 65535, 00:20:18.593 "fsdev_io_cache_size": 256 00:20:18.593 } 00:20:18.593 } 00:20:18.593 ] 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "keyring", 00:20:18.593 "config": [] 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "iobuf", 00:20:18.593 "config": [ 00:20:18.593 { 
00:20:18.593 "method": "iobuf_set_options", 00:20:18.593 "params": { 00:20:18.593 "small_pool_count": 8192, 00:20:18.593 "large_pool_count": 1024, 00:20:18.593 "small_bufsize": 8192, 00:20:18.593 "large_bufsize": 135168, 00:20:18.593 "enable_numa": false 00:20:18.593 } 00:20:18.593 } 00:20:18.593 ] 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "sock", 00:20:18.593 "config": [ 00:20:18.593 { 00:20:18.593 "method": "sock_set_default_impl", 00:20:18.593 "params": { 00:20:18.593 "impl_name": "posix" 00:20:18.593 } 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "method": "sock_impl_set_options", 00:20:18.593 "params": { 00:20:18.593 "impl_name": "ssl", 00:20:18.593 "recv_buf_size": 4096, 00:20:18.593 "send_buf_size": 4096, 00:20:18.593 "enable_recv_pipe": true, 00:20:18.593 "enable_quickack": false, 00:20:18.593 "enable_placement_id": 0, 00:20:18.593 "enable_zerocopy_send_server": true, 00:20:18.593 "enable_zerocopy_send_client": false, 00:20:18.593 "zerocopy_threshold": 0, 00:20:18.593 "tls_version": 0, 00:20:18.593 "enable_ktls": false 00:20:18.593 } 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "method": "sock_impl_set_options", 00:20:18.593 "params": { 00:20:18.593 "impl_name": "posix", 00:20:18.593 "recv_buf_size": 2097152, 00:20:18.593 "send_buf_size": 2097152, 00:20:18.593 "enable_recv_pipe": true, 00:20:18.593 "enable_quickack": false, 00:20:18.593 "enable_placement_id": 0, 00:20:18.593 "enable_zerocopy_send_server": true, 00:20:18.593 "enable_zerocopy_send_client": false, 00:20:18.593 "zerocopy_threshold": 0, 00:20:18.593 "tls_version": 0, 00:20:18.593 "enable_ktls": false 00:20:18.593 } 00:20:18.593 } 00:20:18.593 ] 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "vmd", 00:20:18.593 "config": [] 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "accel", 00:20:18.593 "config": [ 00:20:18.593 { 00:20:18.593 "method": "accel_set_options", 00:20:18.593 "params": { 00:20:18.593 "small_cache_size": 128, 00:20:18.593 "large_cache_size": 16, 00:20:18.593 "task_count": 2048, 00:20:18.593 "sequence_count": 2048, 00:20:18.593 "buf_count": 2048 00:20:18.593 } 00:20:18.593 } 00:20:18.593 ] 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "bdev", 00:20:18.593 "config": [ 00:20:18.593 { 00:20:18.593 "method": "bdev_set_options", 00:20:18.593 "params": { 00:20:18.593 "bdev_io_pool_size": 65535, 00:20:18.593 "bdev_io_cache_size": 256, 00:20:18.593 "bdev_auto_examine": true, 00:20:18.593 "iobuf_small_cache_size": 128, 00:20:18.593 "iobuf_large_cache_size": 16 00:20:18.593 } 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "method": "bdev_raid_set_options", 00:20:18.593 "params": { 00:20:18.593 "process_window_size_kb": 1024, 00:20:18.593 "process_max_bandwidth_mb_sec": 0 00:20:18.593 } 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "method": "bdev_iscsi_set_options", 00:20:18.593 "params": { 00:20:18.593 "timeout_sec": 30 00:20:18.593 } 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "method": "bdev_nvme_set_options", 00:20:18.593 "params": { 00:20:18.593 "action_on_timeout": "none", 00:20:18.593 "timeout_us": 0, 00:20:18.593 "timeout_admin_us": 0, 00:20:18.593 "keep_alive_timeout_ms": 10000, 00:20:18.593 "arbitration_burst": 0, 00:20:18.593 "low_priority_weight": 0, 00:20:18.593 "medium_priority_weight": 0, 00:20:18.593 "high_priority_weight": 0, 00:20:18.593 "nvme_adminq_poll_period_us": 10000, 00:20:18.593 "nvme_ioq_poll_period_us": 0, 00:20:18.593 "io_queue_requests": 0, 00:20:18.593 "delay_cmd_submit": true, 00:20:18.593 "transport_retry_count": 4, 00:20:18.593 
"bdev_retry_count": 3, 00:20:18.593 "transport_ack_timeout": 0, 00:20:18.593 "ctrlr_loss_timeout_sec": 0, 00:20:18.593 "reconnect_delay_sec": 0, 00:20:18.593 "fast_io_fail_timeout_sec": 0, 00:20:18.593 "disable_auto_failback": false, 00:20:18.593 "generate_uuids": false, 00:20:18.593 "transport_tos": 0, 00:20:18.593 "nvme_error_stat": false, 00:20:18.593 "rdma_srq_size": 0, 00:20:18.593 "io_path_stat": false, 00:20:18.593 "allow_accel_sequence": false, 00:20:18.593 "rdma_max_cq_size": 0, 00:20:18.593 "rdma_cm_event_timeout_ms": 0, 00:20:18.593 "dhchap_digests": [ 00:20:18.593 "sha256", 00:20:18.593 "sha384", 00:20:18.593 "sha512" 00:20:18.593 ], 00:20:18.593 "dhchap_dhgroups": [ 00:20:18.593 "null", 00:20:18.593 "ffdhe2048", 00:20:18.593 "ffdhe3072", 00:20:18.593 "ffdhe4096", 00:20:18.593 "ffdhe6144", 00:20:18.593 "ffdhe8192" 00:20:18.593 ] 00:20:18.593 } 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "method": "bdev_nvme_set_hotplug", 00:20:18.593 "params": { 00:20:18.593 "period_us": 100000, 00:20:18.593 "enable": false 00:20:18.593 } 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "method": "bdev_malloc_create", 00:20:18.593 "params": { 00:20:18.593 "name": "malloc0", 00:20:18.593 "num_blocks": 8192, 00:20:18.593 "block_size": 4096, 00:20:18.593 "physical_block_size": 4096, 00:20:18.593 "uuid": "490f27d0-47f9-4d4a-bb4f-2913dff4a331", 00:20:18.593 "optimal_io_boundary": 0, 00:20:18.593 "md_size": 0, 00:20:18.593 "dif_type": 0, 00:20:18.593 "dif_is_head_of_md": false, 00:20:18.593 "dif_pi_format": 0 00:20:18.593 } 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "method": "bdev_wait_for_examine" 00:20:18.593 } 00:20:18.593 ] 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "scsi", 00:20:18.593 "config": null 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "scheduler", 00:20:18.593 "config": [ 00:20:18.593 { 00:20:18.593 "method": "framework_set_scheduler", 00:20:18.593 "params": { 00:20:18.593 "name": "static" 00:20:18.593 } 00:20:18.593 } 00:20:18.593 ] 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "vhost_scsi", 00:20:18.593 "config": [] 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "vhost_blk", 00:20:18.593 "config": [] 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "ublk", 00:20:18.593 "config": [ 00:20:18.593 { 00:20:18.593 "method": "ublk_create_target", 00:20:18.593 "params": { 00:20:18.593 "cpumask": "1" 00:20:18.593 } 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "method": "ublk_start_disk", 00:20:18.593 "params": { 00:20:18.593 "bdev_name": "malloc0", 00:20:18.593 "ublk_id": 0, 00:20:18.593 "num_queues": 1, 00:20:18.593 "queue_depth": 128 00:20:18.593 } 00:20:18.593 } 00:20:18.593 ] 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "nbd", 00:20:18.593 "config": [] 00:20:18.593 }, 00:20:18.593 { 00:20:18.593 "subsystem": "nvmf", 00:20:18.593 "config": [ 00:20:18.593 { 00:20:18.593 "method": "nvmf_set_config", 00:20:18.593 "params": { 00:20:18.593 "discovery_filter": "match_any", 00:20:18.593 "admin_cmd_passthru": { 00:20:18.593 "identify_ctrlr": false 00:20:18.593 }, 00:20:18.593 "dhchap_digests": [ 00:20:18.593 "sha256", 00:20:18.593 "sha384", 00:20:18.593 "sha512" 00:20:18.593 ], 00:20:18.594 "dhchap_dhgroups": [ 00:20:18.594 "null", 00:20:18.594 "ffdhe2048", 00:20:18.594 "ffdhe3072", 00:20:18.594 "ffdhe4096", 00:20:18.594 "ffdhe6144", 00:20:18.594 "ffdhe8192" 00:20:18.594 ] 00:20:18.594 } 00:20:18.594 }, 00:20:18.594 { 00:20:18.594 "method": "nvmf_set_max_subsystems", 00:20:18.594 "params": { 00:20:18.594 "max_subsystems": 1024 
00:20:18.594 } 00:20:18.594 }, 00:20:18.594 { 00:20:18.594 "method": "nvmf_set_crdt", 00:20:18.594 "params": { 00:20:18.594 "crdt1": 0, 00:20:18.594 "crdt2": 0, 00:20:18.594 "crdt3": 0 00:20:18.594 } 00:20:18.594 } 00:20:18.594 ] 00:20:18.594 }, 00:20:18.594 { 00:20:18.594 "subsystem": "iscsi", 00:20:18.594 "config": [ 00:20:18.594 { 00:20:18.594 "method": "iscsi_set_options", 00:20:18.594 "params": { 00:20:18.594 "node_base": "iqn.2016-06.io.spdk", 00:20:18.594 "max_sessions": 128, 00:20:18.594 "max_connections_per_session": 2, 00:20:18.594 "max_queue_depth": 64, 00:20:18.594 "default_time2wait": 2, 00:20:18.594 "default_time2retain": 20, 00:20:18.594 "first_burst_length": 8192, 00:20:18.594 "immediate_data": true, 00:20:18.594 "allow_duplicated_isid": false, 00:20:18.594 "error_recovery_level": 0, 00:20:18.594 "nop_timeout": 60, 00:20:18.594 "nop_in_interval": 30, 00:20:18.594 "disable_chap": false, 00:20:18.594 "require_chap": false, 00:20:18.594 "mutual_chap": false, 00:20:18.594 "chap_group": 0, 00:20:18.594 "max_large_datain_per_connection": 64, 00:20:18.594 "max_r2t_per_connection": 4, 00:20:18.594 "pdu_pool_size": 36864, 00:20:18.594 "immediate_data_pool_size": 16384, 00:20:18.594 "data_out_pool_size": 2048 00:20:18.594 } 00:20:18.594 } 00:20:18.594 ] 00:20:18.594 } 00:20:18.594 ] 00:20:18.594 }' 00:20:18.594 11:28:40 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75462 00:20:18.594 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75462 ']' 00:20:18.594 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75462 00:20:18.594 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:18.594 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.594 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75462 00:20:18.594 killing process with pid 75462 00:20:18.594 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:18.594 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:18.594 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75462' 00:20:18.594 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75462 00:20:18.594 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75462 00:20:20.008 [2024-12-10 11:28:42.080477] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:20.008 [2024-12-10 11:28:42.120831] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:20.009 [2024-12-10 11:28:42.121024] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:20.009 [2024-12-10 11:28:42.128730] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:20.009 [2024-12-10 11:28:42.128795] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:20.009 [2024-12-10 11:28:42.128818] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:20.009 [2024-12-10 11:28:42.128852] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:20.009 [2024-12-10 11:28:42.129030] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:21.912 11:28:43 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75528 00:20:21.912 11:28:43 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75528 00:20:21.912 11:28:43 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:20:21.912 11:28:43 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:20:21.912 "subsystems": [ 00:20:21.912 { 00:20:21.912 "subsystem": "fsdev", 00:20:21.912 "config": [ 00:20:21.912 { 00:20:21.912 "method": "fsdev_set_opts", 00:20:21.912 "params": { 00:20:21.912 "fsdev_io_pool_size": 65535, 00:20:21.912 "fsdev_io_cache_size": 256 00:20:21.912 } 00:20:21.912 } 00:20:21.912 ] 00:20:21.912 }, 00:20:21.912 { 00:20:21.912 "subsystem": "keyring", 00:20:21.912 "config": [] 00:20:21.912 }, 00:20:21.912 { 00:20:21.912 "subsystem": "iobuf", 00:20:21.912 "config": [ 00:20:21.912 { 00:20:21.912 "method": "iobuf_set_options", 00:20:21.912 "params": { 00:20:21.912 "small_pool_count": 8192, 00:20:21.912 "large_pool_count": 1024, 00:20:21.912 "small_bufsize": 8192, 00:20:21.912 "large_bufsize": 135168, 00:20:21.912 "enable_numa": false 00:20:21.912 } 00:20:21.912 } 00:20:21.912 ] 00:20:21.912 }, 00:20:21.912 { 00:20:21.912 "subsystem": "sock", 00:20:21.912 "config": [ 00:20:21.912 { 00:20:21.912 "method": "sock_set_default_impl", 00:20:21.912 "params": { 00:20:21.912 "impl_name": "posix" 00:20:21.912 } 00:20:21.912 }, 00:20:21.912 { 00:20:21.912 "method": "sock_impl_set_options", 00:20:21.912 "params": { 00:20:21.912 "impl_name": "ssl", 00:20:21.912 "recv_buf_size": 4096, 00:20:21.912 "send_buf_size": 4096, 00:20:21.912 "enable_recv_pipe": true, 00:20:21.912 "enable_quickack": false, 00:20:21.912 "enable_placement_id": 0, 00:20:21.912 "enable_zerocopy_send_server": true, 00:20:21.912 "enable_zerocopy_send_client": false, 00:20:21.912 "zerocopy_threshold": 0, 00:20:21.912 "tls_version": 0, 00:20:21.912 "enable_ktls": false 00:20:21.912 } 00:20:21.912 }, 00:20:21.912 { 00:20:21.912 "method": "sock_impl_set_options", 00:20:21.912 "params": { 00:20:21.912 "impl_name": "posix", 00:20:21.912 "recv_buf_size": 2097152, 00:20:21.912 "send_buf_size": 2097152, 00:20:21.912 "enable_recv_pipe": true, 00:20:21.912 "enable_quickack": false, 00:20:21.912 "enable_placement_id": 0, 00:20:21.912 "enable_zerocopy_send_server": true, 00:20:21.912 "enable_zerocopy_send_client": false, 00:20:21.912 "zerocopy_threshold": 0, 00:20:21.912 "tls_version": 0, 00:20:21.912 "enable_ktls": false 00:20:21.912 } 00:20:21.912 } 00:20:21.912 ] 00:20:21.912 }, 00:20:21.912 { 00:20:21.912 "subsystem": "vmd", 00:20:21.912 "config": [] 00:20:21.912 }, 00:20:21.912 { 00:20:21.912 "subsystem": "accel", 00:20:21.912 "config": [ 00:20:21.912 { 00:20:21.912 "method": "accel_set_options", 00:20:21.912 "params": { 00:20:21.912 "small_cache_size": 128, 00:20:21.912 "large_cache_size": 16, 00:20:21.912 "task_count": 2048, 00:20:21.912 "sequence_count": 2048, 00:20:21.912 "buf_count": 2048 00:20:21.912 } 00:20:21.912 } 00:20:21.912 ] 00:20:21.912 }, 00:20:21.912 { 00:20:21.912 "subsystem": "bdev", 00:20:21.912 "config": [ 00:20:21.912 { 00:20:21.912 "method": "bdev_set_options", 00:20:21.912 "params": { 00:20:21.912 "bdev_io_pool_size": 65535, 00:20:21.912 "bdev_io_cache_size": 256, 00:20:21.912 "bdev_auto_examine": true, 00:20:21.912 "iobuf_small_cache_size": 128, 00:20:21.912 "iobuf_large_cache_size": 16 00:20:21.912 } 00:20:21.912 }, 00:20:21.912 { 00:20:21.912 "method": "bdev_raid_set_options", 00:20:21.912 "params": { 00:20:21.912 "process_window_size_kb": 1024, 00:20:21.912 "process_max_bandwidth_mb_sec": 0 00:20:21.912 } 00:20:21.912 }, 
00:20:21.912 { 00:20:21.912 "method": "bdev_iscsi_set_options", 00:20:21.912 "params": { 00:20:21.912 "timeout_sec": 30 00:20:21.912 } 00:20:21.912 }, 00:20:21.912 { 00:20:21.912 "method": "bdev_nvme_set_options", 00:20:21.912 "params": { 00:20:21.912 "action_on_timeout": "none", 00:20:21.912 "timeout_us": 0, 00:20:21.912 "timeout_admin_us": 0, 00:20:21.912 "keep_alive_timeout_ms": 10000, 00:20:21.912 "arbitration_burst": 0, 00:20:21.912 "low_priority_weight": 0, 00:20:21.912 "medium_priority_weight": 0, 00:20:21.912 "high_priority_weight": 0, 00:20:21.912 "nvme_adminq_poll_period_us": 10000, 00:20:21.912 "nvme_ioq_poll_period_us": 0, 00:20:21.912 "io_queue_requests": 0, 00:20:21.912 "delay_cmd_submit": true, 00:20:21.912 "transport_retry_count": 4, 00:20:21.912 "bdev_retry_count": 3, 00:20:21.912 "transport_ack_timeout": 0, 00:20:21.912 "ctrlr_loss_timeout_sec": 0, 00:20:21.912 "reconnect_delay_sec": 0, 00:20:21.912 "fast_io_fail_timeout_sec": 0, 00:20:21.912 "disable_auto_failback": false, 00:20:21.912 "generate_uuids": false, 00:20:21.912 "transport_tos": 0, 00:20:21.912 "nvme_error_stat": false, 00:20:21.912 "rdma_srq_size": 0, 00:20:21.912 "io_path_stat": false, 00:20:21.912 "allow_accel_sequence": false, 00:20:21.912 "rdma_max_cq_size": 0, 00:20:21.912 "rdma_cm_event_timeout_ms": 0, 00:20:21.912 "dhchap_digests": [ 00:20:21.912 "sha256", 00:20:21.912 "sha384", 00:20:21.913 "sha512" 00:20:21.913 ], 00:20:21.913 "dhchap_dhgroups": [ 00:20:21.913 "null", 00:20:21.913 "ffdhe2048", 00:20:21.913 "ffdhe3072", 00:20:21.913 "ffdhe4096", 00:20:21.913 "ffdhe6144", 00:20:21.913 "ffdhe8192" 00:20:21.913 ] 00:20:21.913 } 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "method": "bdev_nvme_set_hotplug", 00:20:21.913 "params": { 00:20:21.913 "period_us": 100000, 00:20:21.913 "enable": false 00:20:21.913 } 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "method": "bdev_malloc_create", 00:20:21.913 "params": { 00:20:21.913 "name": "malloc0", 00:20:21.913 "num_blocks": 8192, 00:20:21.913 "block_size": 4096, 00:20:21.913 "physical_block_size": 4096, 00:20:21.913 "uuid": "490f27d0-47f9-4d4a-bb4f-2913dff4a331", 00:20:21.913 "optimal_io_boundary": 0, 00:20:21.913 "md_size": 0, 00:20:21.913 "dif_type": 0, 00:20:21.913 "dif_is_head_of_md": false, 00:20:21.913 "dif_pi_format": 0 00:20:21.913 } 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "method": "bdev_wait_for_examine" 00:20:21.913 } 00:20:21.913 ] 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "subsystem": "scsi", 00:20:21.913 "config": null 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "subsystem": "scheduler", 00:20:21.913 "config": [ 00:20:21.913 { 00:20:21.913 "method": "framework_set_scheduler", 00:20:21.913 "params": { 00:20:21.913 "name": "static" 00:20:21.913 } 00:20:21.913 } 00:20:21.913 ] 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "subsystem": "vhost_scsi", 00:20:21.913 "config": [] 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "subsystem": "vhost_blk", 00:20:21.913 "config": [] 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "subsystem": "ublk", 00:20:21.913 "config": [ 00:20:21.913 { 00:20:21.913 "method": "ublk_create_target", 00:20:21.913 "params": { 00:20:21.913 "cpumask": "1" 00:20:21.913 } 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "method": "ublk_start_disk", 00:20:21.913 "params": { 00:20:21.913 "bdev_name": "malloc0", 00:20:21.913 "ublk_id": 0, 00:20:21.913 "num_queues": 1, 00:20:21.913 "queue_depth": 128 00:20:21.913 } 00:20:21.913 } 00:20:21.913 ] 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "subsystem": "nbd", 00:20:21.913 "config": [] 00:20:21.913 }, 
00:20:21.913 { 00:20:21.913 "subsystem": "nvmf", 00:20:21.913 "config": [ 00:20:21.913 { 00:20:21.913 "method": "nvmf_set_config", 00:20:21.913 "params": { 00:20:21.913 "discovery_filter": "match_any", 00:20:21.913 "admin_cmd_passthru": { 00:20:21.913 "identify_ctrlr": false 00:20:21.913 }, 00:20:21.913 "dhchap_digests": [ 00:20:21.913 "sha256", 00:20:21.913 "sha384", 00:20:21.913 "sha512" 00:20:21.913 ], 00:20:21.913 "dhchap_dhgroups": [ 00:20:21.913 "null", 00:20:21.913 "ffdhe2048", 00:20:21.913 "ffdhe3072", 00:20:21.913 "ffdhe4096", 00:20:21.913 "ffdhe6144", 00:20:21.913 "ffdhe8192" 00:20:21.913 ] 00:20:21.913 } 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "method": "nvmf_set_max_subsystems", 00:20:21.913 "params": { 00:20:21.913 "max_subsystems": 1024 00:20:21.913 } 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "method": "nvmf_set_crdt", 00:20:21.913 "params": { 00:20:21.913 "crdt1": 0, 00:20:21.913 "crdt2": 0, 00:20:21.913 "crdt3": 0 00:20:21.913 } 00:20:21.913 } 00:20:21.913 ] 00:20:21.913 }, 00:20:21.913 { 00:20:21.913 "subsystem": "iscsi", 00:20:21.913 "config": [ 00:20:21.913 { 00:20:21.913 "method": "iscsi_set_options", 00:20:21.913 "params": { 00:20:21.913 "node_base": "iqn.2016-06.io.spdk", 00:20:21.913 "max_sessions": 128, 00:20:21.913 "max_connections_per_session": 2, 00:20:21.913 "max_queue_depth": 64, 00:20:21.913 "default_time2wait": 2, 00:20:21.913 "default_time2retain": 20, 00:20:21.913 "first_burst_length": 8192, 00:20:21.913 "immediate_data": true, 00:20:21.913 "allow_duplicated_isid": false, 00:20:21.913 "error_recovery_level": 0, 00:20:21.913 "nop_timeout": 60, 00:20:21.913 "nop_in_interval": 30, 00:20:21.913 "disable_chap": false, 00:20:21.913 "require_chap": false, 00:20:21.913 "mutual_chap": false, 00:20:21.913 "chap_group": 0, 00:20:21.913 "max_large_datain_per_connection": 64, 00:20:21.913 "max_r2t_per_connection": 4, 00:20:21.913 "pdu_pool_size": 36864, 00:20:21.913 "immediate_data_pool_size": 16384, 00:20:21.913 "data_out_pool_size": 2048 00:20:21.913 } 00:20:21.913 } 00:20:21.913 ] 00:20:21.913 } 00:20:21.913 ] 00:20:21.913 }' 00:20:21.913 11:28:43 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75528 ']' 00:20:21.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.913 11:28:43 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.913 11:28:43 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.913 11:28:43 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.913 11:28:43 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.913 11:28:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:21.913 [2024-12-10 11:28:43.933248] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
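The JSON blob echoed above is the configuration saved from the first target (pid 75462) being replayed into a fresh spdk_tgt through process substitution (-c /dev/fd/63) — the whole point of test_save_ublk_config: a ublk device described only by saved config must come back as /dev/ublkb0. A minimal standalone sketch of the same round-trip, assuming scripts/rpc.py on PATH, the default /var/tmp/spdk.sock socket, and a ublk-enabled build; the temp path and the poll loop are illustrative, not the harness's own helpers:

    # Capture the live configuration, ublk target and disk included.
    scripts/rpc.py save_config > /tmp/ublk_config.json
    # Relaunch the target from the saved config...
    build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json &
    # ...wait for the RPC socket to answer, then confirm the disk was recreated.
    until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
    scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
    test -b /dev/ublkb0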
00:20:21.913 [2024-12-10 11:28:43.933418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75528 ] 00:20:22.172 [2024-12-10 11:28:44.110598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.172 [2024-12-10 11:28:44.213322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.106 [2024-12-10 11:28:45.119709] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:23.106 [2024-12-10 11:28:45.120798] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:23.106 [2024-12-10 11:28:45.127822] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:23.106 [2024-12-10 11:28:45.127939] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:23.106 [2024-12-10 11:28:45.127960] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:23.106 [2024-12-10 11:28:45.127969] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:23.106 [2024-12-10 11:28:45.135843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:23.106 [2024-12-10 11:28:45.135873] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:23.106 [2024-12-10 11:28:45.143669] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:23.106 [2024-12-10 11:28:45.143795] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:23.106 [2024-12-10 11:28:45.158684] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75528 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75528 ']' 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75528 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.106 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75528 00:20:23.364 killing process with pid 75528 00:20:23.364 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:23.364 
11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:23.364 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75528' 00:20:23.364 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75528 00:20:23.364 11:28:45 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75528 00:20:24.738 [2024-12-10 11:28:46.649835] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:24.738 [2024-12-10 11:28:46.681846] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:24.738 [2024-12-10 11:28:46.682046] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:24.738 [2024-12-10 11:28:46.691673] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:24.738 [2024-12-10 11:28:46.691734] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:24.738 [2024-12-10 11:28:46.691747] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:24.738 [2024-12-10 11:28:46.691799] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:24.738 [2024-12-10 11:28:46.691990] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:26.641 11:28:48 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:20:26.641 ************************************ 00:20:26.641 END TEST test_save_ublk_config 00:20:26.641 ************************************ 00:20:26.641 00:20:26.641 real 0m9.375s 00:20:26.641 user 0m7.299s 00:20:26.641 sys 0m3.060s 00:20:26.641 11:28:48 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.641 11:28:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:26.641 11:28:48 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75614 00:20:26.641 11:28:48 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:26.641 11:28:48 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.641 11:28:48 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75614 00:20:26.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.641 11:28:48 ublk -- common/autotest_common.sh@835 -- # '[' -z 75614 ']' 00:20:26.641 11:28:48 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.641 11:28:48 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.641 11:28:48 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.641 11:28:48 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.641 11:28:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:26.641 [2024-12-10 11:28:48.554796] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
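The target starting here (pid 75614, -m 0x3, so two reactors) serves the remaining tests. test_create_ublk, which follows, exercises the basic lifecycle: create the ublk target, back it with a malloc bdev, expose it as /dev/ublkb0 with 4 queues of depth 512, and sanity-check every field that ublk_get_disks reports. Reduced to its bare RPC sequence (a sketch under the same assumptions as the previous snippet):

    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096          # 128 MiB of 4 KiB blocks
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512           # id 0, 4 queues, depth 512
    scripts/rpc.py ublk_get_disks -n 0 | jq -r '.[0].ublk_device'  # expect /dev/ublkb0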
00:20:26.641 [2024-12-10 11:28:48.555263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75614 ] 00:20:26.641 [2024-12-10 11:28:48.739752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:26.900 [2024-12-10 11:28:48.842043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.900 [2024-12-10 11:28:48.842058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.467 11:28:49 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.467 11:28:49 ublk -- common/autotest_common.sh@868 -- # return 0 00:20:27.467 11:28:49 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:20:27.467 11:28:49 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:27.467 11:28:49 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.467 11:28:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.467 ************************************ 00:20:27.467 START TEST test_create_ublk 00:20:27.467 ************************************ 00:20:27.467 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:20:27.467 11:28:49 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:20:27.467 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.467 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.467 [2024-12-10 11:28:49.621722] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:27.467 [2024-12-10 11:28:49.624244] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:27.467 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.467 11:28:49 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:20:27.467 11:28:49 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:20:27.467 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.467 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.726 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.726 11:28:49 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:20:27.726 11:28:49 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:27.726 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.726 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.726 [2024-12-10 11:28:49.879879] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:27.726 [2024-12-10 11:28:49.880429] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:27.726 [2024-12-10 11:28:49.880451] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:27.726 [2024-12-10 11:28:49.880462] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:27.726 [2024-12-10 11:28:49.888931] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:27.726 [2024-12-10 11:28:49.889071] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:27.985 
[2024-12-10 11:28:49.895673] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:27.985 [2024-12-10 11:28:49.896434] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:27.985 [2024-12-10 11:28:49.925673] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:27.985 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.985 11:28:49 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:20:27.985 11:28:49 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:20:27.985 11:28:49 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:20:27.985 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.985 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.985 11:28:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.985 11:28:49 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:20:27.985 { 00:20:27.985 "ublk_device": "/dev/ublkb0", 00:20:27.985 "id": 0, 00:20:27.985 "queue_depth": 512, 00:20:27.985 "num_queues": 4, 00:20:27.985 "bdev_name": "Malloc0" 00:20:27.985 } 00:20:27.985 ]' 00:20:27.985 11:28:49 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:20:27.985 11:28:50 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:27.985 11:28:50 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:20:27.985 11:28:50 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:20:27.985 11:28:50 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:20:27.985 11:28:50 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:20:27.985 11:28:50 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:20:28.243 11:28:50 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:20:28.243 11:28:50 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:20:28.244 11:28:50 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:28.244 11:28:50 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:20:28.244 11:28:50 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:20:28.244 11:28:50 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:20:28.244 11:28:50 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:20:28.244 11:28:50 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:20:28.244 11:28:50 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:20:28.244 11:28:50 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:20:28.244 11:28:50 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:20:28.244 11:28:50 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:20:28.244 11:28:50 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:20:28.244 11:28:50 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
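The command assembled above writes a 0xcc pattern across the full 128 MiB of /dev/ublkb0 for 10 seconds with inline verification; because the run is time_based, fio immediately warns below that the separate read-verify phase will never be reached. For readability, here is the same invocation as an equivalent INI job file (a mechanical rewrite of the flags, not a file the harness uses):

    [fio_test]
    filename=/dev/ublkb0
    offset=0
    size=134217728        ; 128 MiB
    rw=write
    direct=1
    time_based=1
    runtime=10
    do_verify=1
    verify=pattern
    verify_pattern=0xcc
    verify_state_save=0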
00:20:28.244 11:28:50 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:20:28.244 fio: verification read phase will never start because write phase uses all of runtime 00:20:28.244 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:20:28.244 fio-3.35 00:20:28.244 Starting 1 process 00:20:40.473 00:20:40.473 fio_test: (groupid=0, jobs=1): err= 0: pid=75665: Tue Dec 10 11:29:00 2024 00:20:40.473 write: IOPS=10.4k, BW=40.5MiB/s (42.4MB/s)(405MiB/10001msec); 0 zone resets 00:20:40.473 clat (usec): min=58, max=7835, avg=95.01, stdev=164.18 00:20:40.473 lat (usec): min=58, max=7836, avg=95.80, stdev=164.20 00:20:40.473 clat percentiles (usec): 00:20:40.473 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79], 00:20:40.473 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 83], 00:20:40.473 | 70.00th=[ 87], 80.00th=[ 92], 90.00th=[ 97], 95.00th=[ 106], 00:20:40.473 | 99.00th=[ 126], 99.50th=[ 161], 99.90th=[ 3326], 99.95th=[ 3589], 00:20:40.473 | 99.99th=[ 3982] 00:20:40.473 bw ( KiB/s): min=18544, max=43368, per=99.97%, avg=41435.37, stdev=5559.04, samples=19 00:20:40.473 iops : min= 4636, max=10842, avg=10358.84, stdev=1389.76, samples=19 00:20:40.473 lat (usec) : 100=92.15%, 250=7.37%, 500=0.01%, 750=0.03%, 1000=0.05% 00:20:40.473 lat (msec) : 2=0.14%, 4=0.25%, 10=0.01% 00:20:40.473 cpu : usr=2.81%, sys=6.24%, ctx=103629, majf=0, minf=796 00:20:40.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:40.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.473 issued rwts: total=0,103629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:40.473 00:20:40.473 Run status group 0 (all jobs): 00:20:40.473 WRITE: bw=40.5MiB/s (42.4MB/s), 40.5MiB/s-40.5MiB/s (42.4MB/s-42.4MB/s), io=405MiB (424MB), run=10001-10001msec 00:20:40.473 00:20:40.473 Disk stats (read/write): 00:20:40.473 ublkb0: ios=0/102525, merge=0/0, ticks=0/9052, in_queue=9053, util=99.10% 00:20:40.473 11:29:00 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:20:40.473 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.473 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.473 [2024-12-10 11:29:00.449242] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:40.473 [2024-12-10 11:29:00.485741] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:40.473 [2024-12-10 11:29:00.486292] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:40.473 [2024-12-10 11:29:00.491671] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:40.473 [2024-12-10 11:29:00.492004] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:40.473 [2024-12-10 11:29:00.492026] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:40.473 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.473 11:29:00 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:20:40.473 11:29:00 
ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:20:40.473 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:20:40.473 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:40.473 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.473 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:40.473 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.473 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:20:40.473 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 [2024-12-10 11:29:00.506746] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:20:40.474 request: 00:20:40.474 { 00:20:40.474 "ublk_id": 0, 00:20:40.474 "method": "ublk_stop_disk", 00:20:40.474 "req_id": 1 00:20:40.474 } 00:20:40.474 Got JSON-RPC error response 00:20:40.474 response: 00:20:40.474 { 00:20:40.474 "code": -19, 00:20:40.474 "message": "No such device" 00:20:40.474 } 00:20:40.474 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:40.474 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:20:40.474 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:40.474 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:40.474 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:40.474 11:29:00 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:20:40.474 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 [2024-12-10 11:29:00.522757] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:40.474 [2024-12-10 11:29:00.530675] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:40.474 [2024-12-10 11:29:00.530738] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:40.474 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.474 11:29:00 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:40.474 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 11:29:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.474 11:29:01 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:20:40.474 11:29:01 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:40.474 11:29:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 11:29:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.474 11:29:01 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:40.474 11:29:01 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:20:40.474 11:29:01 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:20:40.474 11:29:01 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:40.474 11:29:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 11:29:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.474 11:29:01 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:40.474 11:29:01 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:20:40.474 11:29:01 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:40.474 00:20:40.474 real 0m11.673s 00:20:40.474 user 0m0.733s 00:20:40.474 sys 0m0.704s 00:20:40.474 11:29:01 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.474 11:29:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 ************************************ 00:20:40.474 END TEST test_create_ublk 00:20:40.474 ************************************ 00:20:40.474 11:29:01 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:20:40.474 11:29:01 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:40.474 11:29:01 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.474 11:29:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 ************************************ 00:20:40.474 START TEST test_create_multi_ublk 00:20:40.474 ************************************ 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 [2024-12-10 11:29:01.350667] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:40.474 [2024-12-10 11:29:01.353550] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 [2024-12-10 11:29:01.631931] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:40.474 [2024-12-10 
11:29:01.632474] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:40.474 [2024-12-10 11:29:01.632499] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:40.474 [2024-12-10 11:29:01.632516] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:40.474 [2024-12-10 11:29:01.640884] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:40.474 [2024-12-10 11:29:01.640930] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:40.474 [2024-12-10 11:29:01.647670] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:40.474 [2024-12-10 11:29:01.648407] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:40.474 [2024-12-10 11:29:01.667656] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 [2024-12-10 11:29:01.927909] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:20:40.474 [2024-12-10 11:29:01.928423] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:20:40.474 [2024-12-10 11:29:01.928451] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:40.474 [2024-12-10 11:29:01.928464] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:40.474 [2024-12-10 11:29:01.935677] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:40.474 [2024-12-10 11:29:01.935706] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:40.474 [2024-12-10 11:29:01.942722] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:40.474 [2024-12-10 11:29:01.943527] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:40.474 [2024-12-10 11:29:01.957722] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 [2024-12-10 11:29:02.216822] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:20:40.474 [2024-12-10 11:29:02.217315] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:20:40.474 [2024-12-10 11:29:02.217340] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:20:40.474 [2024-12-10 11:29:02.217355] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:20:40.474 [2024-12-10 11:29:02.224667] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:40.474 [2024-12-10 11:29:02.224695] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:40.474 [2024-12-10 11:29:02.232704] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:40.474 [2024-12-10 11:29:02.233435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:20:40.474 [2024-12-10 11:29:02.241700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:20:40.474 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.475 [2024-12-10 11:29:02.498873] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:20:40.475 [2024-12-10 11:29:02.499392] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:20:40.475 [2024-12-10 11:29:02.499421] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:20:40.475 [2024-12-10 11:29:02.499433] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:20:40.475 [2024-12-10 11:29:02.507878] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:40.475 [2024-12-10 11:29:02.507917] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:40.475 [2024-12-10 11:29:02.514665] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:40.475 [2024-12-10 11:29:02.515407] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:20:40.475 [2024-12-10 11:29:02.518704] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:20:40.475 { 00:20:40.475 "ublk_device": "/dev/ublkb0", 00:20:40.475 "id": 0, 00:20:40.475 "queue_depth": 512, 00:20:40.475 "num_queues": 4, 00:20:40.475 "bdev_name": "Malloc0" 00:20:40.475 }, 00:20:40.475 { 00:20:40.475 "ublk_device": "/dev/ublkb1", 00:20:40.475 "id": 1, 00:20:40.475 "queue_depth": 512, 00:20:40.475 "num_queues": 4, 00:20:40.475 "bdev_name": "Malloc1" 00:20:40.475 }, 00:20:40.475 { 00:20:40.475 "ublk_device": "/dev/ublkb2", 00:20:40.475 "id": 2, 00:20:40.475 "queue_depth": 512, 00:20:40.475 "num_queues": 4, 00:20:40.475 "bdev_name": "Malloc2" 00:20:40.475 }, 00:20:40.475 { 00:20:40.475 "ublk_device": "/dev/ublkb3", 00:20:40.475 "id": 3, 00:20:40.475 "queue_depth": 512, 00:20:40.475 "num_queues": 4, 00:20:40.475 "bdev_name": "Malloc3" 00:20:40.475 } 00:20:40.475 ]' 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:40.475 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:20:40.733 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:20:40.733 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:20:40.733 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:40.733 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:20:40.733 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:40.733 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:20:40.733 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:40.733 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.733 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:20:40.733 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:20:40.733 11:29:02 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:20:40.991 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:20:40.991 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:20:40.991 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:40.991 11:29:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:20:40.991 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:40.991 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:20:40.991 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:20:40.991 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.991 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:20:40.991 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:20:40.991 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:20:40.991 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:20:40.991 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:20:41.250 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:41.250 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:20:41.250 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:41.250 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:20:41.250 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:20:41.250 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:41.250 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:20:41.250 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:20:41.250 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:41.508 [2024-12-10 11:29:03.572919] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:20:41.508 [2024-12-10 11:29:03.616720] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:41.508 [2024-12-10 11:29:03.617873] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:41.508 [2024-12-10 11:29:03.625690] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:41.508 [2024-12-10 11:29:03.626072] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:41.508 [2024-12-10 11:29:03.626095] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.508 11:29:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:41.508 [2024-12-10 11:29:03.633847] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:41.508 [2024-12-10 11:29:03.664127] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:41.508 [2024-12-10 11:29:03.665346] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:41.508 [2024-12-10 11:29:03.671676] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:41.508 [2024-12-10 11:29:03.672010] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:41.508 [2024-12-10 11:29:03.672039] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:41.767 11:29:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.767 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:41.767 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:20:41.767 11:29:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.767 11:29:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:41.767 [2024-12-10 11:29:03.687797] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:20:41.767 [2024-12-10 11:29:03.717724] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:41.767 [2024-12-10 11:29:03.718753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:20:41.767 [2024-12-10 11:29:03.724675] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:41.767 [2024-12-10 11:29:03.725023] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:20:41.767 [2024-12-10 11:29:03.725049] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:20:41.767 11:29:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.767 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:41.767 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:20:41.767 11:29:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.767 11:29:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:41.767 [2024-12-10 
11:29:03.731813] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:20:41.767 [2024-12-10 11:29:03.760721] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:41.767 [2024-12-10 11:29:03.761598] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:20:41.767 [2024-12-10 11:29:03.769720] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:41.767 [2024-12-10 11:29:03.770048] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:20:41.767 [2024-12-10 11:29:03.770068] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:20:41.767 11:29:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.767 11:29:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:20:42.026 [2024-12-10 11:29:04.063773] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:42.026 [2024-12-10 11:29:04.070694] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:42.026 [2024-12-10 11:29:04.070742] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:42.026 11:29:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:20:42.026 11:29:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:42.026 11:29:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:42.026 11:29:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.026 11:29:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:42.590 11:29:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.590 11:29:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:42.590 11:29:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:42.590 11:29:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.590 11:29:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.154 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.154 11:29:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:43.154 11:29:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:43.154 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.154 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.412 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.412 11:29:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:43.412 11:29:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:20:43.412 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.412 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
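Teardown in test_create_multi_ublk mirrors the setup: each of the four disks was stopped above, the ublk target is destroyed with a widened RPC timeout (-t 120, presumably because destruction waits on kernel-side device teardown), the malloc bdevs are deleted, and check_leftover_devices asserts nothing survived. The check under way here amounts to the following sketch (same assumptions as the earlier snippets):

    scripts/rpc.py -t 120 ublk_destroy_target
    for m in Malloc0 Malloc1 Malloc2 Malloc3; do
        scripts/rpc.py bdev_malloc_delete "$m"
    done
    # Both lists must come back empty or the test leaked devices.
    [ "$(scripts/rpc.py bdev_get_bdevs | jq length)" -eq 0 ]
    [ "$(scripts/rpc.py bdev_lvol_get_lvstores | jq length)" -eq 0 ]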
00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:20:43.671 ************************************ 00:20:43.671 END TEST test_create_multi_ublk 00:20:43.671 ************************************ 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:43.671 00:20:43.671 real 0m4.444s 00:20:43.671 user 0m1.309s 00:20:43.671 sys 0m0.179s 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.671 11:29:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.671 11:29:05 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:43.671 11:29:05 ublk -- ublk/ublk.sh@147 -- # cleanup 00:20:43.671 11:29:05 ublk -- ublk/ublk.sh@130 -- # killprocess 75614 00:20:43.671 11:29:05 ublk -- common/autotest_common.sh@954 -- # '[' -z 75614 ']' 00:20:43.671 11:29:05 ublk -- common/autotest_common.sh@958 -- # kill -0 75614 00:20:43.671 11:29:05 ublk -- common/autotest_common.sh@959 -- # uname 00:20:43.671 11:29:05 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.671 11:29:05 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75614 00:20:43.930 killing process with pid 75614 00:20:43.930 11:29:05 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.930 11:29:05 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.930 11:29:05 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75614' 00:20:43.930 11:29:05 ublk -- common/autotest_common.sh@973 -- # kill 75614 00:20:43.930 11:29:05 ublk -- common/autotest_common.sh@978 -- # wait 75614 00:20:44.891 [2024-12-10 11:29:06.817775] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:44.891 [2024-12-10 11:29:06.817837] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:45.832 ************************************ 00:20:45.832 END TEST ublk 00:20:45.832 ************************************ 00:20:45.832 00:20:45.832 real 0m29.194s 00:20:45.832 user 0m42.746s 00:20:45.832 sys 0m9.534s 00:20:45.832 11:29:07 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.832 11:29:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:45.832 11:29:07 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:45.832 11:29:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:45.832 
11:29:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.832 11:29:07 -- common/autotest_common.sh@10 -- # set +x 00:20:45.832 ************************************ 00:20:45.832 START TEST ublk_recovery 00:20:45.832 ************************************ 00:20:45.832 11:29:07 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:46.091 * Looking for test storage... 00:20:46.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:46.091 11:29:08 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:46.091 11:29:08 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:46.091 11:29:08 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:20:46.091 11:29:08 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:46.091 11:29:08 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:20:46.091 11:29:08 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.091 11:29:08 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:46.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.091 --rc genhtml_branch_coverage=1 00:20:46.091 --rc genhtml_function_coverage=1 00:20:46.091 --rc genhtml_legend=1 00:20:46.091 --rc geninfo_all_blocks=1 00:20:46.091 --rc geninfo_unexecuted_blocks=1 00:20:46.091 00:20:46.091 ' 00:20:46.091 11:29:08 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:46.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.091 --rc genhtml_branch_coverage=1 00:20:46.091 --rc genhtml_function_coverage=1 00:20:46.091 --rc genhtml_legend=1 00:20:46.091 --rc geninfo_all_blocks=1 00:20:46.091 --rc geninfo_unexecuted_blocks=1 00:20:46.091 00:20:46.091 ' 00:20:46.091 11:29:08 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:46.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.091 --rc genhtml_branch_coverage=1 00:20:46.091 --rc genhtml_function_coverage=1 00:20:46.091 --rc genhtml_legend=1 00:20:46.091 --rc geninfo_all_blocks=1 00:20:46.091 --rc geninfo_unexecuted_blocks=1 00:20:46.092 00:20:46.092 ' 00:20:46.092 11:29:08 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:46.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.092 --rc genhtml_branch_coverage=1 00:20:46.092 --rc genhtml_function_coverage=1 00:20:46.092 --rc genhtml_legend=1 00:20:46.092 --rc geninfo_all_blocks=1 00:20:46.092 --rc geninfo_unexecuted_blocks=1 00:20:46.092 00:20:46.092 ' 00:20:46.092 11:29:08 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:46.092 11:29:08 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:46.092 11:29:08 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:46.092 11:29:08 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:46.092 11:29:08 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:46.092 11:29:08 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:46.092 11:29:08 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:46.092 11:29:08 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:46.092 11:29:08 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:20:46.092 11:29:08 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:20:46.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.092 11:29:08 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76031 00:20:46.092 11:29:08 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:46.092 11:29:08 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76031 00:20:46.092 11:29:08 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:46.092 11:29:08 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76031 ']' 00:20:46.092 11:29:08 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.092 11:29:08 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.092 11:29:08 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.092 11:29:08 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.092 11:29:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.350 [2024-12-10 11:29:08.291895] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:20:46.350 [2024-12-10 11:29:08.292066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76031 ] 00:20:46.350 [2024-12-10 11:29:08.480009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:46.609 [2024-12-10 11:29:08.610307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.609 [2024-12-10 11:29:08.610317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.545 11:29:09 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.545 11:29:09 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:47.545 11:29:09 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:20:47.545 11:29:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.545 11:29:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.545 [2024-12-10 11:29:09.383799] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:47.545 [2024-12-10 11:29:09.386356] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:47.545 11:29:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.545 11:29:09 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:47.545 11:29:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.545 11:29:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.545 malloc0 00:20:47.545 11:29:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.545 11:29:09 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:20:47.545 11:29:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.545 11:29:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.545 [2024-12-10 11:29:09.517955] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:20:47.545 [2024-12-10 11:29:09.518134] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:20:47.545 [2024-12-10 11:29:09.518155] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:47.545 [2024-12-10 11:29:09.518165] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:47.545 [2024-12-10 11:29:09.525700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:47.545 [2024-12-10 11:29:09.525727] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:47.545 [2024-12-10 11:29:09.533668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:47.545 [2024-12-10 11:29:09.533851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:47.545 [2024-12-10 11:29:09.556676] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:47.545 1 00:20:47.545 11:29:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.545 11:29:09 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:20:48.481 11:29:10 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76066 00:20:48.481 11:29:10 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:20:48.481 11:29:10 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:20:48.739 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:48.739 fio-3.35 00:20:48.739 Starting 1 process 00:20:54.006 11:29:15 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76031 00:20:54.006 11:29:15 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:20:59.273 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76031 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:20:59.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.273 11:29:20 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76172 00:20:59.273 11:29:20 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:59.273 11:29:20 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.273 11:29:20 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76172 00:20:59.274 11:29:20 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76172 ']' 00:20:59.274 11:29:20 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.274 11:29:20 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.274 11:29:20 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.274 11:29:20 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.274 11:29:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.274 [2024-12-10 11:29:20.701480] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
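The recovery scenario driven from here: ublk_recovery.sh has just built a ublk disk over a malloc bdev and run fio against /dev/ublkb1, SIGKILLed the SPDK target mid-run, and restarted it; it now re-attaches the surviving kernel device with ublk_recover_disk while fio keeps going (the recovery RPCs are traced below). A condensed sketch of the whole sequence, with commands and flags copied from the trace and only the rpc/spdk_pid variable names assumed:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" ublk_create_target
"$rpc" bdev_malloc_create -b malloc0 64 4096      # 64 MiB bdev, 4 KiB blocks
"$rpc" ublk_start_disk malloc0 1 -q 2 -d 128      # /dev/ublkb1: 2 queues, QD 128

taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &
fio_proc=$!

kill -9 "$spdk_pid"                               # crash the target under I/O
# ...restart spdk_tgt -m 0x3 -L ublk, waitforlisten, then:
"$rpc" ublk_create_target
"$rpc" bdev_malloc_create -b malloc0 64 4096
"$rpc" ublk_recover_disk malloc0 1                # GET_DEV_INFO -> START/END_USER_RECOVERY
wait "$fio_proc"                                  # fio rides through the restart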
00:20:59.274 [2024-12-10 11:29:20.701993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76172 ] 00:20:59.274 [2024-12-10 11:29:20.891273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:59.274 [2024-12-10 11:29:21.021222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.274 [2024-12-10 11:29:21.021233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.840 11:29:21 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.840 11:29:21 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:59.840 11:29:21 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:20:59.840 11:29:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.840 11:29:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.840 [2024-12-10 11:29:21.820737] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:59.840 [2024-12-10 11:29:21.823203] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:59.840 11:29:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.840 11:29:21 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:59.840 11:29:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.840 11:29:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.840 malloc0 00:20:59.840 11:29:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.840 11:29:21 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:20:59.840 11:29:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.840 11:29:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.840 [2024-12-10 11:29:21.950405] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:20:59.840 [2024-12-10 11:29:21.950506] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:59.840 [2024-12-10 11:29:21.950541] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:20:59.840 [2024-12-10 11:29:21.957827] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:20:59.840 [2024-12-10 11:29:21.957879] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:20:59.840 [2024-12-10 11:29:21.957909] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:20:59.840 [2024-12-10 11:29:21.958035] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:20:59.840 1 00:20:59.840 11:29:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.840 11:29:21 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76066 00:20:59.840 [2024-12-10 11:29:21.965870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:20:59.840 [2024-12-10 11:29:21.973418] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:20:59.840 [2024-12-10 11:29:21.980950] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:20:59.840 [2024-12-10 
11:29:21.981018] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:21:56.139 00:21:56.139 fio_test: (groupid=0, jobs=1): err= 0: pid=76069: Tue Dec 10 11:30:10 2024 00:21:56.139 read: IOPS=17.9k, BW=69.9MiB/s (73.2MB/s)(4191MiB/60005msec) 00:21:56.139 slat (nsec): min=1844, max=737986, avg=6251.57, stdev=3186.95 00:21:56.139 clat (usec): min=1112, max=6420.1k, avg=3519.53, stdev=49551.81 00:21:56.139 lat (usec): min=1121, max=6420.1k, avg=3525.78, stdev=49551.82 00:21:56.139 clat percentiles (usec): 00:21:56.139 | 1.00th=[ 2573], 5.00th=[ 2737], 10.00th=[ 2835], 20.00th=[ 2900], 00:21:56.139 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:21:56.139 | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3326], 95.00th=[ 4047], 00:21:56.139 | 99.00th=[ 5735], 99.50th=[ 6390], 99.90th=[ 7635], 99.95th=[ 8029], 00:21:56.139 | 99.99th=[13173] 00:21:56.139 bw ( KiB/s): min=25464, max=87392, per=100.00%, avg=79579.53, stdev=7525.14, samples=107 00:21:56.139 iops : min= 6366, max=21848, avg=19894.87, stdev=1881.28, samples=107 00:21:56.139 write: IOPS=17.9k, BW=69.8MiB/s (73.2MB/s)(4189MiB/60005msec); 0 zone resets 00:21:56.139 slat (nsec): min=1880, max=231815, avg=6514.39, stdev=3148.92 00:21:56.139 clat (usec): min=951, max=6420.3k, avg=3626.04, stdev=49565.51 00:21:56.139 lat (usec): min=957, max=6420.3k, avg=3632.55, stdev=49565.52 00:21:56.139 clat percentiles (usec): 00:21:56.139 | 1.00th=[ 2638], 5.00th=[ 2868], 10.00th=[ 2933], 20.00th=[ 3032], 00:21:56.139 | 30.00th=[ 3064], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195], 00:21:56.139 | 70.00th=[ 3228], 80.00th=[ 3294], 90.00th=[ 3425], 95.00th=[ 3949], 00:21:56.139 | 99.00th=[ 5735], 99.50th=[ 6521], 99.90th=[ 7767], 99.95th=[ 8225], 00:21:56.139 | 99.99th=[13173] 00:21:56.139 bw ( KiB/s): min=26184, max=84752, per=100.00%, avg=79513.43, stdev=7492.07, samples=107 00:21:56.139 iops : min= 6546, max=21188, avg=19878.35, stdev=1873.02, samples=107 00:21:56.139 lat (usec) : 1000=0.01% 00:21:56.139 lat (msec) : 2=0.06%, 4=94.92%, 10=5.00%, 20=0.01%, >=2000=0.01% 00:21:56.139 cpu : usr=10.18%, sys=21.25%, ctx=68193, majf=0, minf=14 00:21:56.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:21:56.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:56.139 issued rwts: total=1073021,1072401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.139 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:56.139 00:21:56.139 Run status group 0 (all jobs): 00:21:56.139 READ: bw=69.9MiB/s (73.2MB/s), 69.9MiB/s-69.9MiB/s (73.2MB/s-73.2MB/s), io=4191MiB (4395MB), run=60005-60005msec 00:21:56.139 WRITE: bw=69.8MiB/s (73.2MB/s), 69.8MiB/s-69.8MiB/s (73.2MB/s-73.2MB/s), io=4189MiB (4393MB), run=60005-60005msec 00:21:56.139 00:21:56.139 Disk stats (read/write): 00:21:56.139 ublkb1: ios=1070787/1070086, merge=0/0, ticks=3672418/3663937, in_queue=7336356, util=99.94% 00:21:56.139 11:30:10 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:21:56.139 11:30:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.139 11:30:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.139 [2024-12-10 11:30:10.848050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:56.139 [2024-12-10 11:30:10.875684] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:21:56.139 [2024-12-10 11:30:10.875956] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:56.139 [2024-12-10 11:30:10.882807] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:56.139 [2024-12-10 11:30:10.882966] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:56.139 [2024-12-10 11:30:10.883016] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:56.139 11:30:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.139 11:30:10 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:21:56.139 11:30:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.139 11:30:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.139 [2024-12-10 11:30:10.897887] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:56.139 [2024-12-10 11:30:10.905728] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:56.140 [2024-12-10 11:30:10.905788] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:56.140 11:30:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.140 11:30:10 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:21:56.140 11:30:10 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:21:56.140 11:30:10 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76172 00:21:56.140 11:30:10 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76172 ']' 00:21:56.140 11:30:10 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76172 00:21:56.140 11:30:10 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:21:56.140 11:30:10 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.140 11:30:10 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76172 00:21:56.140 killing process with pid 76172 00:21:56.140 11:30:10 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.140 11:30:10 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.140 11:30:10 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76172' 00:21:56.140 11:30:10 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76172 00:21:56.140 11:30:10 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76172 00:21:56.140 [2024-12-10 11:30:12.371266] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:56.140 [2024-12-10 11:30:12.371331] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:56.140 00:21:56.140 real 1m5.614s 00:21:56.140 user 1m47.355s 00:21:56.140 sys 0m31.944s 00:21:56.140 11:30:13 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:56.140 ************************************ 00:21:56.140 11:30:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.140 END TEST ublk_recovery 00:21:56.140 ************************************ 00:21:56.140 11:30:13 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:21:56.140 11:30:13 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:56.140 11:30:13 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:56.140 11:30:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.140 11:30:13 -- common/autotest_common.sh@10 -- # set +x 00:21:56.140 11:30:13 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:56.140 11:30:13 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:56.140 11:30:13 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:21:56.140 11:30:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:56.140 11:30:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:56.140 11:30:13 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:56.140 11:30:13 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:56.140 11:30:13 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:56.140 11:30:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:56.140 11:30:13 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:21:56.140 11:30:13 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:56.140 11:30:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:56.140 11:30:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:56.140 11:30:13 -- common/autotest_common.sh@10 -- # set +x 00:21:56.140 ************************************ 00:21:56.140 START TEST ftl 00:21:56.140 ************************************ 00:21:56.140 11:30:13 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:56.140 * Looking for test storage... 00:21:56.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:56.140 11:30:13 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:56.140 11:30:13 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:56.140 11:30:13 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:21:56.140 11:30:13 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:56.140 11:30:13 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:56.140 11:30:13 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:56.140 11:30:13 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:56.140 11:30:13 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:21:56.140 11:30:13 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:21:56.140 11:30:13 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:21:56.140 11:30:13 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:21:56.140 11:30:13 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:21:56.140 11:30:13 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:21:56.140 11:30:13 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:21:56.140 11:30:13 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:56.140 11:30:13 ftl -- scripts/common.sh@344 -- # case "$op" in 00:21:56.140 11:30:13 ftl -- scripts/common.sh@345 -- # : 1 00:21:56.140 11:30:13 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:56.140 11:30:13 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:56.140 11:30:13 ftl -- scripts/common.sh@365 -- # decimal 1 00:21:56.140 11:30:13 ftl -- scripts/common.sh@353 -- # local d=1 00:21:56.140 11:30:13 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:56.140 11:30:13 ftl -- scripts/common.sh@355 -- # echo 1 00:21:56.140 11:30:13 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:21:56.140 11:30:13 ftl -- scripts/common.sh@366 -- # decimal 2 00:21:56.140 11:30:13 ftl -- scripts/common.sh@353 -- # local d=2 00:21:56.140 11:30:13 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:56.140 11:30:13 ftl -- scripts/common.sh@355 -- # echo 2 00:21:56.140 11:30:13 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:21:56.140 11:30:13 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:56.140 11:30:13 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:56.140 11:30:13 ftl -- scripts/common.sh@368 -- # return 0 00:21:56.140 11:30:13 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:56.140 11:30:13 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:56.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.140 --rc genhtml_branch_coverage=1 00:21:56.140 --rc genhtml_function_coverage=1 00:21:56.140 --rc genhtml_legend=1 00:21:56.140 --rc geninfo_all_blocks=1 00:21:56.140 --rc geninfo_unexecuted_blocks=1 00:21:56.140 00:21:56.140 ' 00:21:56.140 11:30:13 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:56.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.140 --rc genhtml_branch_coverage=1 00:21:56.140 --rc genhtml_function_coverage=1 00:21:56.140 --rc genhtml_legend=1 00:21:56.140 --rc geninfo_all_blocks=1 00:21:56.140 --rc geninfo_unexecuted_blocks=1 00:21:56.140 00:21:56.140 ' 00:21:56.140 11:30:13 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:56.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.140 --rc genhtml_branch_coverage=1 00:21:56.140 --rc genhtml_function_coverage=1 00:21:56.140 --rc genhtml_legend=1 00:21:56.140 --rc geninfo_all_blocks=1 00:21:56.140 --rc geninfo_unexecuted_blocks=1 00:21:56.140 00:21:56.140 ' 00:21:56.140 11:30:13 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:56.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.140 --rc genhtml_branch_coverage=1 00:21:56.140 --rc genhtml_function_coverage=1 00:21:56.140 --rc genhtml_legend=1 00:21:56.140 --rc geninfo_all_blocks=1 00:21:56.140 --rc geninfo_unexecuted_blocks=1 00:21:56.140 00:21:56.140 ' 00:21:56.140 11:30:13 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:56.140 11:30:13 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:56.140 11:30:13 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:56.140 11:30:13 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:56.140 11:30:13 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
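The gate traced just above decides whether branch-coverage flags get passed to lcov: scripts/common.sh splits both version strings on ".", "-" and ":" and compares them field by field, so "lt 1.15 2" succeeds and the extra --rc options are exported. A minimal sketch of that comparison, reconstructed from the cmp_versions trace and assuming purely numeric fields:

version_lt() {                         # returns 0 when $1 < $2
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1                           # equal is not less-than
}

version_lt 1.15 2 && echo "lcov < 2: enable --rc lcov_*_coverage options"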
00:21:56.140 11:30:13 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:56.140 11:30:13 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:56.140 11:30:13 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:56.140 11:30:13 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:56.140 11:30:13 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:56.140 11:30:13 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:56.140 11:30:13 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:56.140 11:30:13 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:56.140 11:30:13 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:56.140 11:30:13 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:56.140 11:30:13 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:56.140 11:30:13 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:56.140 11:30:13 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:56.140 11:30:13 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:56.140 11:30:13 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:56.140 11:30:13 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:56.140 11:30:13 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:56.140 11:30:13 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:56.141 11:30:13 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:56.141 11:30:13 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:56.141 11:30:13 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:56.141 11:30:13 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:56.141 11:30:13 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:56.141 11:30:13 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:56.141 11:30:13 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:56.141 11:30:13 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:21:56.141 11:30:13 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:21:56.141 11:30:13 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:21:56.141 11:30:13 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:21:56.141 11:30:13 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:56.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:56.141 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:56.141 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:56.141 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:56.141 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:56.141 11:30:14 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76963 00:21:56.141 11:30:14 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:21:56.141 11:30:14 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76963 00:21:56.141 11:30:14 ftl -- common/autotest_common.sh@835 -- # '[' -z 76963 ']' 00:21:56.141 11:30:14 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.141 11:30:14 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.141 11:30:14 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.141 11:30:14 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.141 11:30:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:56.141 [2024-12-10 11:30:14.574397] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:21:56.141 [2024-12-10 11:30:14.574567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76963 ] 00:21:56.141 [2024-12-10 11:30:14.760300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.141 [2024-12-10 11:30:14.886517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.141 11:30:15 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.141 11:30:15 ftl -- common/autotest_common.sh@868 -- # return 0 00:21:56.141 11:30:15 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:21:56.141 11:30:15 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:56.141 11:30:16 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:21:56.141 11:30:16 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@50 -- # break 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@63 -- # break 00:21:56.141 11:30:17 ftl -- ftl/ftl.sh@66 -- # killprocess 76963 00:21:56.141 11:30:17 ftl -- common/autotest_common.sh@954 -- # '[' -z 76963 ']' 00:21:56.141 11:30:17 ftl -- common/autotest_common.sh@958 -- # kill -0 76963 00:21:56.141 11:30:17 ftl -- common/autotest_common.sh@959 -- # uname 00:21:56.141 11:30:18 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.141 11:30:18 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76963 00:21:56.141 killing process with pid 76963 00:21:56.141 11:30:18 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.141 11:30:18 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.141 11:30:18 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76963' 00:21:56.141 11:30:18 ftl -- common/autotest_common.sh@973 -- # kill 76963 00:21:56.141 11:30:18 ftl -- common/autotest_common.sh@978 -- # wait 76963 00:21:58.040 11:30:20 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:21:58.040 11:30:20 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:58.040 11:30:20 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:58.040 11:30:20 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.040 11:30:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:58.040 ************************************ 00:21:58.040 START TEST ftl_fio_basic 00:21:58.040 ************************************ 00:21:58.040 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:58.040 * Looking for test storage... 00:21:58.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.040 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:58.040 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:21:58.040 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:58.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.298 --rc genhtml_branch_coverage=1 00:21:58.298 --rc genhtml_function_coverage=1 00:21:58.298 --rc genhtml_legend=1 00:21:58.298 --rc geninfo_all_blocks=1 00:21:58.298 --rc geninfo_unexecuted_blocks=1 00:21:58.298 00:21:58.298 ' 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:58.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.298 --rc genhtml_branch_coverage=1 00:21:58.298 --rc genhtml_function_coverage=1 00:21:58.298 --rc genhtml_legend=1 00:21:58.298 --rc geninfo_all_blocks=1 00:21:58.298 --rc geninfo_unexecuted_blocks=1 00:21:58.298 00:21:58.298 ' 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:58.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.298 --rc genhtml_branch_coverage=1 00:21:58.298 --rc genhtml_function_coverage=1 00:21:58.298 --rc genhtml_legend=1 00:21:58.298 --rc geninfo_all_blocks=1 00:21:58.298 --rc geninfo_unexecuted_blocks=1 00:21:58.298 00:21:58.298 ' 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:58.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.298 --rc genhtml_branch_coverage=1 00:21:58.298 --rc genhtml_function_coverage=1 00:21:58.298 --rc genhtml_legend=1 00:21:58.298 --rc geninfo_all_blocks=1 00:21:58.298 --rc geninfo_unexecuted_blocks=1 00:21:58.298 00:21:58.298 ' 00:21:58.298 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
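ftl/common.sh, sourced again here for the fio run, is mostly CPU placement and config plumbing: the exports traced below keep the FTL target reactors on core 0 and the initiator on core 1, and point each side at its own JSON config and RPC socket. The effective settings, values copied from the trace:

export ftl_tgt_core_mask='[0]'        # FTL target reactors
export spdk_tgt_cpumask='[0]'         # spdk_tgt placement
export spdk_ini_cpumask='[1]'         # initiator kept off the target core
export spdk_tgt_cnfg="$rootdir/test/ftl/config/tgt.json"
export spdk_ini_cnfg="$rootdir/test/ftl/config/ini.json"
export spdk_ini_rpc=/var/tmp/spdk.tgt.sock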
00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77107 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77107 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77107 ']' 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.299 11:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:58.299 [2024-12-10 11:30:20.362463] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
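fio.sh resolves its third positional argument through an associative array of suites, so the "basic" run above expands to the three job names that drive the rest of this test; an unknown suite would fail the -z guard instead. A sketch, with the suite contents copied verbatim from the trace:

declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

tests=${suite[$3]}                    # $3 == "basic" in this run
[ -z "$tests" ] && exit 1             # mirrors the '[' -z ... ']' check above

The target itself is started with -m 7, a 0b111 core mask, which is why three reactors come up on cores 0 through 2 in the messages that follow.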
00:21:58.299 [2024-12-10 11:30:20.362876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77107 ] 00:21:58.557 [2024-12-10 11:30:20.542826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:58.557 [2024-12-10 11:30:20.646098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.557 [2024-12-10 11:30:20.646189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.557 [2024-12-10 11:30:20.646207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.492 11:30:21 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.492 11:30:21 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:21:59.492 11:30:21 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:59.492 11:30:21 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:21:59.492 11:30:21 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:59.492 11:30:21 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:21:59.492 11:30:21 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:21:59.492 11:30:21 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:59.750 11:30:21 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:59.750 11:30:21 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:21:59.750 11:30:21 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:59.750 11:30:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:59.750 11:30:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:59.750 11:30:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:59.750 11:30:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:59.750 11:30:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:00.008 11:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:00.008 { 00:22:00.008 "name": "nvme0n1", 00:22:00.008 "aliases": [ 00:22:00.008 "324e835b-d492-4575-9a24-20bf7507c284" 00:22:00.008 ], 00:22:00.008 "product_name": "NVMe disk", 00:22:00.008 "block_size": 4096, 00:22:00.008 "num_blocks": 1310720, 00:22:00.008 "uuid": "324e835b-d492-4575-9a24-20bf7507c284", 00:22:00.008 "numa_id": -1, 00:22:00.008 "assigned_rate_limits": { 00:22:00.008 "rw_ios_per_sec": 0, 00:22:00.008 "rw_mbytes_per_sec": 0, 00:22:00.008 "r_mbytes_per_sec": 0, 00:22:00.008 "w_mbytes_per_sec": 0 00:22:00.008 }, 00:22:00.008 "claimed": false, 00:22:00.008 "zoned": false, 00:22:00.008 "supported_io_types": { 00:22:00.008 "read": true, 00:22:00.008 "write": true, 00:22:00.008 "unmap": true, 00:22:00.008 "flush": true, 00:22:00.008 "reset": true, 00:22:00.008 "nvme_admin": true, 00:22:00.008 "nvme_io": true, 00:22:00.008 "nvme_io_md": false, 00:22:00.008 "write_zeroes": true, 00:22:00.008 "zcopy": false, 00:22:00.008 "get_zone_info": false, 00:22:00.008 "zone_management": false, 00:22:00.008 "zone_append": false, 00:22:00.008 "compare": true, 00:22:00.008 "compare_and_write": false, 00:22:00.008 "abort": true, 00:22:00.008 
"seek_hole": false, 00:22:00.008 "seek_data": false, 00:22:00.008 "copy": true, 00:22:00.008 "nvme_iov_md": false 00:22:00.008 }, 00:22:00.008 "driver_specific": { 00:22:00.008 "nvme": [ 00:22:00.008 { 00:22:00.008 "pci_address": "0000:00:11.0", 00:22:00.008 "trid": { 00:22:00.008 "trtype": "PCIe", 00:22:00.008 "traddr": "0000:00:11.0" 00:22:00.008 }, 00:22:00.008 "ctrlr_data": { 00:22:00.008 "cntlid": 0, 00:22:00.008 "vendor_id": "0x1b36", 00:22:00.008 "model_number": "QEMU NVMe Ctrl", 00:22:00.008 "serial_number": "12341", 00:22:00.008 "firmware_revision": "8.0.0", 00:22:00.008 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:00.008 "oacs": { 00:22:00.008 "security": 0, 00:22:00.008 "format": 1, 00:22:00.008 "firmware": 0, 00:22:00.008 "ns_manage": 1 00:22:00.008 }, 00:22:00.008 "multi_ctrlr": false, 00:22:00.008 "ana_reporting": false 00:22:00.008 }, 00:22:00.008 "vs": { 00:22:00.008 "nvme_version": "1.4" 00:22:00.008 }, 00:22:00.008 "ns_data": { 00:22:00.008 "id": 1, 00:22:00.008 "can_share": false 00:22:00.008 } 00:22:00.008 } 00:22:00.008 ], 00:22:00.008 "mp_policy": "active_passive" 00:22:00.008 } 00:22:00.008 } 00:22:00.008 ]' 00:22:00.008 11:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:00.008 11:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:00.009 11:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:00.009 11:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:00.009 11:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:00.009 11:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:22:00.009 11:30:22 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:22:00.009 11:30:22 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:00.009 11:30:22 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:22:00.009 11:30:22 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:00.009 11:30:22 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:00.574 11:30:22 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:22:00.574 11:30:22 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:00.832 11:30:22 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=a359efec-c5e8-43b7-bb94-d3b4d5c070f2 00:22:00.832 11:30:22 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a359efec-c5e8-43b7-bb94-d3b4d5c070f2 00:22:01.089 11:30:23 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b6a45a23-e711-48b2-8486-7691e853a203 00:22:01.089 11:30:23 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b6a45a23-e711-48b2-8486-7691e853a203 00:22:01.089 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:22:01.089 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:01.089 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b6a45a23-e711-48b2-8486-7691e853a203 00:22:01.089 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:22:01.089 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b6a45a23-e711-48b2-8486-7691e853a203 00:22:01.089 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b6a45a23-e711-48b2-8486-7691e853a203 
00:22:01.089 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:01.089 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:01.089 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:01.089 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b6a45a23-e711-48b2-8486-7691e853a203 00:22:01.348 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:01.348 { 00:22:01.348 "name": "b6a45a23-e711-48b2-8486-7691e853a203", 00:22:01.348 "aliases": [ 00:22:01.348 "lvs/nvme0n1p0" 00:22:01.348 ], 00:22:01.348 "product_name": "Logical Volume", 00:22:01.348 "block_size": 4096, 00:22:01.348 "num_blocks": 26476544, 00:22:01.348 "uuid": "b6a45a23-e711-48b2-8486-7691e853a203", 00:22:01.348 "assigned_rate_limits": { 00:22:01.348 "rw_ios_per_sec": 0, 00:22:01.348 "rw_mbytes_per_sec": 0, 00:22:01.348 "r_mbytes_per_sec": 0, 00:22:01.348 "w_mbytes_per_sec": 0 00:22:01.348 }, 00:22:01.348 "claimed": false, 00:22:01.348 "zoned": false, 00:22:01.348 "supported_io_types": { 00:22:01.348 "read": true, 00:22:01.348 "write": true, 00:22:01.348 "unmap": true, 00:22:01.348 "flush": false, 00:22:01.348 "reset": true, 00:22:01.348 "nvme_admin": false, 00:22:01.348 "nvme_io": false, 00:22:01.348 "nvme_io_md": false, 00:22:01.348 "write_zeroes": true, 00:22:01.348 "zcopy": false, 00:22:01.348 "get_zone_info": false, 00:22:01.348 "zone_management": false, 00:22:01.348 "zone_append": false, 00:22:01.348 "compare": false, 00:22:01.348 "compare_and_write": false, 00:22:01.348 "abort": false, 00:22:01.348 "seek_hole": true, 00:22:01.348 "seek_data": true, 00:22:01.348 "copy": false, 00:22:01.348 "nvme_iov_md": false 00:22:01.348 }, 00:22:01.348 "driver_specific": { 00:22:01.348 "lvol": { 00:22:01.348 "lvol_store_uuid": "a359efec-c5e8-43b7-bb94-d3b4d5c070f2", 00:22:01.348 "base_bdev": "nvme0n1", 00:22:01.348 "thin_provision": true, 00:22:01.348 "num_allocated_clusters": 0, 00:22:01.348 "snapshot": false, 00:22:01.348 "clone": false, 00:22:01.348 "esnap_clone": false 00:22:01.348 } 00:22:01.348 } 00:22:01.348 } 00:22:01.348 ]' 00:22:01.348 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:01.348 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:01.348 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:01.348 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:01.348 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:01.348 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:01.348 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:22:01.349 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:22:01.349 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:01.607 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:01.607 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:01.607 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b6a45a23-e711-48b2-8486-7691e853a203 00:22:01.607 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b6a45a23-e711-48b2-8486-7691e853a203 00:22:01.607 11:30:23 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:01.607 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:01.607 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:01.607 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b6a45a23-e711-48b2-8486-7691e853a203 00:22:01.865 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:01.865 { 00:22:01.865 "name": "b6a45a23-e711-48b2-8486-7691e853a203", 00:22:01.865 "aliases": [ 00:22:01.865 "lvs/nvme0n1p0" 00:22:01.865 ], 00:22:01.865 "product_name": "Logical Volume", 00:22:01.865 "block_size": 4096, 00:22:01.865 "num_blocks": 26476544, 00:22:01.865 "uuid": "b6a45a23-e711-48b2-8486-7691e853a203", 00:22:01.865 "assigned_rate_limits": { 00:22:01.865 "rw_ios_per_sec": 0, 00:22:01.865 "rw_mbytes_per_sec": 0, 00:22:01.865 "r_mbytes_per_sec": 0, 00:22:01.865 "w_mbytes_per_sec": 0 00:22:01.865 }, 00:22:01.865 "claimed": false, 00:22:01.865 "zoned": false, 00:22:01.865 "supported_io_types": { 00:22:01.865 "read": true, 00:22:01.865 "write": true, 00:22:01.865 "unmap": true, 00:22:01.865 "flush": false, 00:22:01.865 "reset": true, 00:22:01.865 "nvme_admin": false, 00:22:01.865 "nvme_io": false, 00:22:01.865 "nvme_io_md": false, 00:22:01.865 "write_zeroes": true, 00:22:01.865 "zcopy": false, 00:22:01.865 "get_zone_info": false, 00:22:01.865 "zone_management": false, 00:22:01.865 "zone_append": false, 00:22:01.865 "compare": false, 00:22:01.865 "compare_and_write": false, 00:22:01.865 "abort": false, 00:22:01.865 "seek_hole": true, 00:22:01.865 "seek_data": true, 00:22:01.865 "copy": false, 00:22:01.865 "nvme_iov_md": false 00:22:01.865 }, 00:22:01.865 "driver_specific": { 00:22:01.865 "lvol": { 00:22:01.865 "lvol_store_uuid": "a359efec-c5e8-43b7-bb94-d3b4d5c070f2", 00:22:01.865 "base_bdev": "nvme0n1", 00:22:01.865 "thin_provision": true, 00:22:01.865 "num_allocated_clusters": 0, 00:22:01.865 "snapshot": false, 00:22:01.865 "clone": false, 00:22:01.865 "esnap_clone": false 00:22:01.865 } 00:22:01.865 } 00:22:01.865 } 00:22:01.865 ]' 00:22:01.865 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:02.123 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:02.123 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:02.123 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:02.123 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:02.123 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:02.123 11:30:24 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:22:02.123 11:30:24 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:02.392 11:30:24 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:22:02.392 11:30:24 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:22:02.392 11:30:24 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:22:02.392 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:22:02.392 11:30:24 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b6a45a23-e711-48b2-8486-7691e853a203 00:22:02.392 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=b6a45a23-e711-48b2-8486-7691e853a203 00:22:02.392 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:02.392 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:02.392 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:02.392 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b6a45a23-e711-48b2-8486-7691e853a203 00:22:02.680 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:02.680 { 00:22:02.680 "name": "b6a45a23-e711-48b2-8486-7691e853a203", 00:22:02.680 "aliases": [ 00:22:02.680 "lvs/nvme0n1p0" 00:22:02.680 ], 00:22:02.680 "product_name": "Logical Volume", 00:22:02.680 "block_size": 4096, 00:22:02.680 "num_blocks": 26476544, 00:22:02.680 "uuid": "b6a45a23-e711-48b2-8486-7691e853a203", 00:22:02.680 "assigned_rate_limits": { 00:22:02.680 "rw_ios_per_sec": 0, 00:22:02.680 "rw_mbytes_per_sec": 0, 00:22:02.680 "r_mbytes_per_sec": 0, 00:22:02.680 "w_mbytes_per_sec": 0 00:22:02.680 }, 00:22:02.680 "claimed": false, 00:22:02.680 "zoned": false, 00:22:02.680 "supported_io_types": { 00:22:02.680 "read": true, 00:22:02.680 "write": true, 00:22:02.680 "unmap": true, 00:22:02.680 "flush": false, 00:22:02.680 "reset": true, 00:22:02.680 "nvme_admin": false, 00:22:02.680 "nvme_io": false, 00:22:02.680 "nvme_io_md": false, 00:22:02.680 "write_zeroes": true, 00:22:02.680 "zcopy": false, 00:22:02.680 "get_zone_info": false, 00:22:02.680 "zone_management": false, 00:22:02.680 "zone_append": false, 00:22:02.680 "compare": false, 00:22:02.680 "compare_and_write": false, 00:22:02.680 "abort": false, 00:22:02.680 "seek_hole": true, 00:22:02.680 "seek_data": true, 00:22:02.680 "copy": false, 00:22:02.680 "nvme_iov_md": false 00:22:02.680 }, 00:22:02.680 "driver_specific": { 00:22:02.680 "lvol": { 00:22:02.680 "lvol_store_uuid": "a359efec-c5e8-43b7-bb94-d3b4d5c070f2", 00:22:02.680 "base_bdev": "nvme0n1", 00:22:02.680 "thin_provision": true, 00:22:02.680 "num_allocated_clusters": 0, 00:22:02.680 "snapshot": false, 00:22:02.680 "clone": false, 00:22:02.680 "esnap_clone": false 00:22:02.680 } 00:22:02.680 } 00:22:02.680 } 00:22:02.680 ]' 00:22:02.680 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:02.680 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:02.680 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:02.680 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:02.680 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:02.680 11:30:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:02.680 11:30:24 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:22:02.680 11:30:24 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:22:02.680 11:30:24 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b6a45a23-e711-48b2-8486-7691e853a203 -c nvc0n1p0 --l2p_dram_limit 60 00:22:02.939 [2024-12-10 11:30:24.997993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-10 11:30:24.998066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.939 [2024-12-10 11:30:24.998111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:02.939 
[2024-12-10 11:30:24.998126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-10 11:30:24.998224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-10 11:30:24.998246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.939 [2024-12-10 11:30:24.998266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:02.939 [2024-12-10 11:30:24.998296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-10 11:30:24.998368] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.939 [2024-12-10 11:30:24.999398] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.939 [2024-12-10 11:30:24.999613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-10 11:30:24.999652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.939 [2024-12-10 11:30:24.999672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:22:02.939 [2024-12-10 11:30:24.999687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-10 11:30:24.999928] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 12cdde72-4e19-4953-babc-e678b6cc4014 00:22:02.939 [2024-12-10 11:30:25.001147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-10 11:30:25.001200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:02.939 [2024-12-10 11:30:25.001221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:02.939 [2024-12-10 11:30:25.001238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-10 11:30:25.006314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-10 11:30:25.006406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.939 [2024-12-10 11:30:25.006429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.988 ms 00:22:02.939 [2024-12-10 11:30:25.006446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-10 11:30:25.006618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-10 11:30:25.006671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.939 [2024-12-10 11:30:25.006689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:22:02.939 [2024-12-10 11:30:25.006710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-10 11:30:25.006798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-10 11:30:25.006821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.939 [2024-12-10 11:30:25.006836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:02.939 [2024-12-10 11:30:25.006852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-10 11:30:25.006893] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.939 [2024-12-10 11:30:25.011516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-10 
11:30:25.011568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.939 [2024-12-10 11:30:25.011611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.629 ms 00:22:02.939 [2024-12-10 11:30:25.011629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-10 11:30:25.011743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-10 11:30:25.011766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.939 [2024-12-10 11:30:25.011797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:02.939 [2024-12-10 11:30:25.011839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-10 11:30:25.011944] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:02.939 [2024-12-10 11:30:25.012172] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:02.939 [2024-12-10 11:30:25.012219] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.939 [2024-12-10 11:30:25.012239] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:02.939 [2024-12-10 11:30:25.012262] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.939 [2024-12-10 11:30:25.012280] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.939 [2024-12-10 11:30:25.012298] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:02.939 [2024-12-10 11:30:25.012311] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.939 [2024-12-10 11:30:25.012334] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:02.939 [2024-12-10 11:30:25.012357] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:02.939 [2024-12-10 11:30:25.012387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-10 11:30:25.012406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.939 [2024-12-10 11:30:25.012424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:22:02.939 [2024-12-10 11:30:25.012439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-10 11:30:25.012576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-10 11:30:25.012600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.939 [2024-12-10 11:30:25.012617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:22:02.939 [2024-12-10 11:30:25.012647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.940 [2024-12-10 11:30:25.012803] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.940 [2024-12-10 11:30:25.012824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.940 [2024-12-10 11:30:25.012846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.940 [2024-12-10 11:30:25.012860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.940 [2024-12-10 11:30:25.012877] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:22:02.940 [2024-12-10 11:30:25.012890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.940 [2024-12-10 11:30:25.012909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:02.940 [2024-12-10 11:30:25.012923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.940 [2024-12-10 11:30:25.012940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:02.940 [2024-12-10 11:30:25.012963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.940 [2024-12-10 11:30:25.012991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.940 [2024-12-10 11:30:25.013007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:02.940 [2024-12-10 11:30:25.013023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.940 [2024-12-10 11:30:25.013036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.940 [2024-12-10 11:30:25.013052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:02.940 [2024-12-10 11:30:25.013065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.940 [2024-12-10 11:30:25.013083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.940 [2024-12-10 11:30:25.013097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:02.940 [2024-12-10 11:30:25.013112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.940 [2024-12-10 11:30:25.013126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.940 [2024-12-10 11:30:25.013142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:02.940 [2024-12-10 11:30:25.013160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.940 [2024-12-10 11:30:25.013175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.940 [2024-12-10 11:30:25.013189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:02.940 [2024-12-10 11:30:25.013206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.940 [2024-12-10 11:30:25.013229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.940 [2024-12-10 11:30:25.013259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:02.940 [2024-12-10 11:30:25.013277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.940 [2024-12-10 11:30:25.013293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.940 [2024-12-10 11:30:25.013307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:02.940 [2024-12-10 11:30:25.013322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.940 [2024-12-10 11:30:25.013335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.940 [2024-12-10 11:30:25.013355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:02.940 [2024-12-10 11:30:25.013390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.940 [2024-12-10 11:30:25.013407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.940 [2024-12-10 11:30:25.013421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:02.940 [2024-12-10 11:30:25.013436] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.940 [2024-12-10 11:30:25.013452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:02.940 [2024-12-10 11:30:25.013478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:02.940 [2024-12-10 11:30:25.013503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.940 [2024-12-10 11:30:25.013528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:02.940 [2024-12-10 11:30:25.013543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:02.940 [2024-12-10 11:30:25.013559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.940 [2024-12-10 11:30:25.013572] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.940 [2024-12-10 11:30:25.013589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.940 [2024-12-10 11:30:25.013604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.940 [2024-12-10 11:30:25.013620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.940 [2024-12-10 11:30:25.013651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.940 [2024-12-10 11:30:25.013672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.940 [2024-12-10 11:30:25.013687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.940 [2024-12-10 11:30:25.013713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.940 [2024-12-10 11:30:25.013737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.940 [2024-12-10 11:30:25.013760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.940 [2024-12-10 11:30:25.013783] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.940 [2024-12-10 11:30:25.013806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.940 [2024-12-10 11:30:25.013829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:02.940 [2024-12-10 11:30:25.013854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:02.940 [2024-12-10 11:30:25.013870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:02.940 [2024-12-10 11:30:25.013889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:02.940 [2024-12-10 11:30:25.013903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:02.940 [2024-12-10 11:30:25.013920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:02.940 [2024-12-10 11:30:25.013934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:02.940 [2024-12-10 11:30:25.013950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:22:02.940 [2024-12-10 11:30:25.013964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:02.940 [2024-12-10 11:30:25.013982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:02.940 [2024-12-10 11:30:25.013996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:02.940 [2024-12-10 11:30:25.014012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:02.940 [2024-12-10 11:30:25.014026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:02.940 [2024-12-10 11:30:25.014048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:02.940 [2024-12-10 11:30:25.014072] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.940 [2024-12-10 11:30:25.014108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.940 [2024-12-10 11:30:25.014130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.940 [2024-12-10 11:30:25.014147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.940 [2024-12-10 11:30:25.014161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.940 [2024-12-10 11:30:25.014177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.940 [2024-12-10 11:30:25.014193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.940 [2024-12-10 11:30:25.014210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.940 [2024-12-10 11:30:25.014224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:22:02.940 [2024-12-10 11:30:25.014241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.940 [2024-12-10 11:30:25.014336] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
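Note: the get_bdev_size helper traced above (autotest_common.sh@1382-1392) turns the bdev_get_bdevs JSON into a size in MiB: it pulls block_size and num_blocks with jq and does integer shell arithmetic. A minimal sketch reconstructed from the xtrace (paths shortened; illustrative, not the canonical autotest_common.sh source):

    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        # Query bdev metadata over the SPDK RPC socket
        bdev_info=$(scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 in the dumps above
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 26476544 in the dumps above
        echo $((nb * bs / 1024 / 1024))               # size in MiB
    }

For the lvol above: 26476544 blocks * 4096 bytes = 108447924224 bytes = 103424 MiB, matching the bdev_size=103424 echoed in the trace.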
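Note: the '[: -eq: unary operator expected' message recorded above (fio.sh line 52) is a shell artifact rather than an FTL failure. The test expands to '[ -eq 1 ]' because the variable being compared is empty in this configuration, so '[' sees '-eq' where its left operand should be; the condition simply evaluates false and execution continues at fio.sh@56, as the next trace line shows. A sketch of the failure mode and the usual guard (the variable name here is illustrative; the log does not show which variable line 52 tests):

    flag=""                 # empty in this run
    [ $flag -eq 1 ]         # expands to '[ -eq 1 ]' -> unary operator expected
    [ "${flag:-0}" -eq 1 ]  # guarded form: empty defaults to 0, test is well-formed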
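Note: the startup trace that follows the bdev_ftl_create call above can be read directly against the RPC arguments. The traced command, restated (verbatim from the xtrace, path shortened):

    # Base device: the thin-provisioned lvol; NV cache: split partition nvc0n1p0;
    # resident L2P capped at 60 MiB of DRAM.
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d b6a45a23-e711-48b2-8486-7691e853a203 -c nvc0n1p0 --l2p_dram_limit 60

The layout dump is consistent with those arguments: base capacity 103424.00 MiB and NV cache capacity 5171.00 MiB match the sizes computed earlier; 20971520 L2P entries at 4 bytes each give the 80.00 MiB l2p region; and the 60 MiB DRAM limit is why the later trace reports 'l2p maximum resident size is: 59 (of 60) MiB'. Scrubbing the 5 NV-cache chunks dominates startup, as the durations below record (3241.557 ms of the 3765.673 ms total).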
00:22:02.940 [2024-12-10 11:30:25.014368] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:06.218 [2024-12-10 11:30:28.255872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.218 [2024-12-10 11:30:28.256121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:06.218 [2024-12-10 11:30:28.256159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3241.557 ms 00:22:06.218 [2024-12-10 11:30:28.256196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.218 [2024-12-10 11:30:28.288974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.218 [2024-12-10 11:30:28.289046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.218 [2024-12-10 11:30:28.289073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.502 ms 00:22:06.218 [2024-12-10 11:30:28.289096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.218 [2024-12-10 11:30:28.289287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.218 [2024-12-10 11:30:28.289318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:06.218 [2024-12-10 11:30:28.289336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:06.218 [2024-12-10 11:30:28.289359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.218 [2024-12-10 11:30:28.346776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.218 [2024-12-10 11:30:28.346845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.218 [2024-12-10 11:30:28.346876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.342 ms 00:22:06.218 [2024-12-10 11:30:28.346894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.218 [2024-12-10 11:30:28.346963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.218 [2024-12-10 11:30:28.346986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.218 [2024-12-10 11:30:28.347002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:06.218 [2024-12-10 11:30:28.347019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.218 [2024-12-10 11:30:28.347471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.218 [2024-12-10 11:30:28.347530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.218 [2024-12-10 11:30:28.347551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:22:06.218 [2024-12-10 11:30:28.347572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.218 [2024-12-10 11:30:28.347762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.218 [2024-12-10 11:30:28.347803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.218 [2024-12-10 11:30:28.347820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:22:06.218 [2024-12-10 11:30:28.347838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.218 [2024-12-10 11:30:28.366105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.218 [2024-12-10 11:30:28.366169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.218 [2024-12-10 
11:30:28.366193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.228 ms 00:22:06.219 [2024-12-10 11:30:28.366211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.219 [2024-12-10 11:30:28.380059] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:06.477 [2024-12-10 11:30:28.394419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.477 [2024-12-10 11:30:28.394495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:06.477 [2024-12-10 11:30:28.394537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.036 ms 00:22:06.477 [2024-12-10 11:30:28.394551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.477 [2024-12-10 11:30:28.495041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.477 [2024-12-10 11:30:28.495124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:06.477 [2024-12-10 11:30:28.495157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.413 ms 00:22:06.477 [2024-12-10 11:30:28.495172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.477 [2024-12-10 11:30:28.495438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.477 [2024-12-10 11:30:28.495464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:06.477 [2024-12-10 11:30:28.495486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:22:06.477 [2024-12-10 11:30:28.495501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.477 [2024-12-10 11:30:28.528339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.477 [2024-12-10 11:30:28.528411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:06.477 [2024-12-10 11:30:28.528439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.727 ms 00:22:06.477 [2024-12-10 11:30:28.528454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.477 [2024-12-10 11:30:28.560016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.477 [2024-12-10 11:30:28.560088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:06.477 [2024-12-10 11:30:28.560117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.509 ms 00:22:06.477 [2024-12-10 11:30:28.560132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.477 [2024-12-10 11:30:28.560925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.477 [2024-12-10 11:30:28.560967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:06.477 [2024-12-10 11:30:28.560991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:22:06.477 [2024-12-10 11:30:28.561005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.734 [2024-12-10 11:30:28.663908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.734 [2024-12-10 11:30:28.663979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:06.734 [2024-12-10 11:30:28.664011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.802 ms 00:22:06.734 [2024-12-10 11:30:28.664030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.734 [2024-12-10 
11:30:28.696896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.734 [2024-12-10 11:30:28.696951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:06.734 [2024-12-10 11:30:28.696994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.758 ms 00:22:06.734 [2024-12-10 11:30:28.697010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.734 [2024-12-10 11:30:28.729510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.734 [2024-12-10 11:30:28.729584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:06.735 [2024-12-10 11:30:28.729611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.444 ms 00:22:06.735 [2024-12-10 11:30:28.729645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.735 [2024-12-10 11:30:28.762424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.735 [2024-12-10 11:30:28.762509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:06.735 [2024-12-10 11:30:28.762538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.713 ms 00:22:06.735 [2024-12-10 11:30:28.762553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.735 [2024-12-10 11:30:28.762622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.735 [2024-12-10 11:30:28.762674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:06.735 [2024-12-10 11:30:28.762703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:06.735 [2024-12-10 11:30:28.762717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.735 [2024-12-10 11:30:28.762900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.735 [2024-12-10 11:30:28.762922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:06.735 [2024-12-10 11:30:28.762940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:06.735 [2024-12-10 11:30:28.762966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.735 [2024-12-10 11:30:28.764295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3765.673 ms, result 0 00:22:06.735 { 00:22:06.735 "name": "ftl0", 00:22:06.735 "uuid": "12cdde72-4e19-4953-babc-e678b6cc4014" 00:22:06.735 } 00:22:06.735 11:30:28 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:22:06.735 11:30:28 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:22:06.735 11:30:28 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:06.735 11:30:28 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:22:06.735 11:30:28 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:06.735 11:30:28 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:06.735 11:30:28 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:06.993 11:30:29 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:07.251 [ 00:22:07.251 { 00:22:07.251 "name": "ftl0", 00:22:07.251 "aliases": [ 00:22:07.251 "12cdde72-4e19-4953-babc-e678b6cc4014" 00:22:07.251 ], 00:22:07.251 "product_name": "FTL 
disk", 00:22:07.251 "block_size": 4096, 00:22:07.251 "num_blocks": 20971520, 00:22:07.251 "uuid": "12cdde72-4e19-4953-babc-e678b6cc4014", 00:22:07.251 "assigned_rate_limits": { 00:22:07.251 "rw_ios_per_sec": 0, 00:22:07.251 "rw_mbytes_per_sec": 0, 00:22:07.251 "r_mbytes_per_sec": 0, 00:22:07.251 "w_mbytes_per_sec": 0 00:22:07.251 }, 00:22:07.251 "claimed": false, 00:22:07.251 "zoned": false, 00:22:07.251 "supported_io_types": { 00:22:07.251 "read": true, 00:22:07.251 "write": true, 00:22:07.251 "unmap": true, 00:22:07.251 "flush": true, 00:22:07.251 "reset": false, 00:22:07.251 "nvme_admin": false, 00:22:07.251 "nvme_io": false, 00:22:07.251 "nvme_io_md": false, 00:22:07.251 "write_zeroes": true, 00:22:07.251 "zcopy": false, 00:22:07.251 "get_zone_info": false, 00:22:07.251 "zone_management": false, 00:22:07.251 "zone_append": false, 00:22:07.251 "compare": false, 00:22:07.251 "compare_and_write": false, 00:22:07.251 "abort": false, 00:22:07.251 "seek_hole": false, 00:22:07.251 "seek_data": false, 00:22:07.251 "copy": false, 00:22:07.251 "nvme_iov_md": false 00:22:07.251 }, 00:22:07.251 "driver_specific": { 00:22:07.251 "ftl": { 00:22:07.251 "base_bdev": "b6a45a23-e711-48b2-8486-7691e853a203", 00:22:07.251 "cache": "nvc0n1p0" 00:22:07.251 } 00:22:07.251 } 00:22:07.251 } 00:22:07.251 ] 00:22:07.251 11:30:29 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:22:07.251 11:30:29 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:22:07.251 11:30:29 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:07.818 11:30:29 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:22:07.818 11:30:29 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:07.818 [2024-12-10 11:30:29.945522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.818 [2024-12-10 11:30:29.945597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:07.818 [2024-12-10 11:30:29.945623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:07.818 [2024-12-10 11:30:29.945668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.818 [2024-12-10 11:30:29.945721] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:07.818 [2024-12-10 11:30:29.949121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.818 [2024-12-10 11:30:29.949164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:07.818 [2024-12-10 11:30:29.949188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.367 ms 00:22:07.818 [2024-12-10 11:30:29.949203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.818 [2024-12-10 11:30:29.949755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.818 [2024-12-10 11:30:29.949787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:07.818 [2024-12-10 11:30:29.949808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:22:07.818 [2024-12-10 11:30:29.949821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.818 [2024-12-10 11:30:29.953159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.818 [2024-12-10 11:30:29.953203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:07.818 
[2024-12-10 11:30:29.953225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:22:07.818 [2024-12-10 11:30:29.953239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.818 [2024-12-10 11:30:29.959965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.818 [2024-12-10 11:30:29.960006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:07.818 [2024-12-10 11:30:29.960028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.685 ms 00:22:07.818 [2024-12-10 11:30:29.960042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.077 [2024-12-10 11:30:29.991824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.077 [2024-12-10 11:30:29.991887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:08.077 [2024-12-10 11:30:29.991935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.621 ms 00:22:08.077 [2024-12-10 11:30:29.991950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.077 [2024-12-10 11:30:30.011852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.077 [2024-12-10 11:30:30.011924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:08.077 [2024-12-10 11:30:30.011958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.820 ms 00:22:08.077 [2024-12-10 11:30:30.011983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.077 [2024-12-10 11:30:30.012240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.077 [2024-12-10 11:30:30.012264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:08.077 [2024-12-10 11:30:30.012283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:22:08.077 [2024-12-10 11:30:30.012297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.077 [2024-12-10 11:30:30.044257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.077 [2024-12-10 11:30:30.044327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:08.077 [2024-12-10 11:30:30.044354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.918 ms 00:22:08.077 [2024-12-10 11:30:30.044369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.077 [2024-12-10 11:30:30.075839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.077 [2024-12-10 11:30:30.075913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:08.077 [2024-12-10 11:30:30.075941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.396 ms 00:22:08.077 [2024-12-10 11:30:30.075956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.077 [2024-12-10 11:30:30.107244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.077 [2024-12-10 11:30:30.107521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:08.077 [2024-12-10 11:30:30.107564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.204 ms 00:22:08.077 [2024-12-10 11:30:30.107582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.077 [2024-12-10 11:30:30.139006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.077 [2024-12-10 11:30:30.139076] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:08.077 [2024-12-10 11:30:30.139104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.192 ms 00:22:08.077 [2024-12-10 11:30:30.139118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.077 [2024-12-10 11:30:30.139214] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:08.077 [2024-12-10 11:30:30.139247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:08.077 [2024-12-10 11:30:30.139490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 
[2024-12-10 11:30:30.139600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.139987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:22:08.078 [2024-12-10 11:30:30.140098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.140999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.141014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.141030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.141044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.141060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.141077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.141098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.141125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.141159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:08.078 [2024-12-10 11:30:30.141194] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:08.078 [2024-12-10 11:30:30.141230] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12cdde72-4e19-4953-babc-e678b6cc4014 00:22:08.078 [2024-12-10 11:30:30.141246] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:08.078 [2024-12-10 11:30:30.141263] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:08.078 [2024-12-10 11:30:30.141277] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:08.078 [2024-12-10 11:30:30.141297] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:08.078 [2024-12-10 11:30:30.141318] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:08.078 [2024-12-10 11:30:30.141345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:08.078 [2024-12-10 11:30:30.141362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:08.079 [2024-12-10 11:30:30.141376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:08.079 [2024-12-10 11:30:30.141393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:08.079 [2024-12-10 11:30:30.141422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.079 [2024-12-10 11:30:30.141441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:08.079 [2024-12-10 11:30:30.141459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.232 ms 00:22:08.079 [2024-12-10 11:30:30.141474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.079 [2024-12-10 11:30:30.158455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.079 [2024-12-10 11:30:30.158724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:08.079 [2024-12-10 11:30:30.158766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.873 ms 00:22:08.079 [2024-12-10 11:30:30.158782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.079 [2024-12-10 11:30:30.159253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.079 [2024-12-10 11:30:30.159277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:08.079 [2024-12-10 11:30:30.159297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:22:08.079 [2024-12-10 11:30:30.159311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.079 [2024-12-10 11:30:30.217839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.079 [2024-12-10 11:30:30.218107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:08.079 [2024-12-10 11:30:30.218149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.079 [2024-12-10 11:30:30.218166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
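Note: this bdev_ftl_unload sequence is the mirror of startup: the L2P, NV-cache metadata, valid map, P2L, band info, trim metadata and the superblock are persisted, the device is marked clean, and per-band statistics are dumped before the 'Rollback' records release each startup step in reverse. The band dump shows every band at 0 / 261120 valid blocks with wr_cnt 0, and the stats report 'user writes: 0' against 960 internal metadata writes, which is why WAF (total writes divided by user writes) prints as inf. The finish message below reports the whole 'FTL shutdown' process at 470.514 ms. The teardown itself is a single RPC, as traced at fio.sh@73:

    # Persist FTL metadata and detach the bdev before stopping the target
    scripts/rpc.py bdev_ftl_unload -b ftl0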
00:22:08.079 [2024-12-10 11:30:30.218264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.079 [2024-12-10 11:30:30.218282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:08.079 [2024-12-10 11:30:30.218313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.079 [2024-12-10 11:30:30.218327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.079 [2024-12-10 11:30:30.218505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.079 [2024-12-10 11:30:30.218531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:08.079 [2024-12-10 11:30:30.218548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.079 [2024-12-10 11:30:30.218562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.079 [2024-12-10 11:30:30.218603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.079 [2024-12-10 11:30:30.218619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:08.079 [2024-12-10 11:30:30.218663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.079 [2024-12-10 11:30:30.218679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.337 [2024-12-10 11:30:30.328770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.337 [2024-12-10 11:30:30.329039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:08.337 [2024-12-10 11:30:30.329080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.337 [2024-12-10 11:30:30.329097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.337 [2024-12-10 11:30:30.414525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.337 [2024-12-10 11:30:30.414844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:08.337 [2024-12-10 11:30:30.414887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.337 [2024-12-10 11:30:30.414904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.337 [2024-12-10 11:30:30.415049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.337 [2024-12-10 11:30:30.415071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:08.337 [2024-12-10 11:30:30.415093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.337 [2024-12-10 11:30:30.415107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.337 [2024-12-10 11:30:30.415198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.337 [2024-12-10 11:30:30.415218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:08.337 [2024-12-10 11:30:30.415235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.337 [2024-12-10 11:30:30.415249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.337 [2024-12-10 11:30:30.415403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.337 [2024-12-10 11:30:30.415425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:08.337 [2024-12-10 11:30:30.415442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.337 [2024-12-10 
11:30:30.415473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.337 [2024-12-10 11:30:30.415552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.338 [2024-12-10 11:30:30.415571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:08.338 [2024-12-10 11:30:30.415588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.338 [2024-12-10 11:30:30.415602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.338 [2024-12-10 11:30:30.415675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.338 [2024-12-10 11:30:30.415695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:08.338 [2024-12-10 11:30:30.415712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.338 [2024-12-10 11:30:30.415727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.338 [2024-12-10 11:30:30.415824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.338 [2024-12-10 11:30:30.415844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:08.338 [2024-12-10 11:30:30.415862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.338 [2024-12-10 11:30:30.415876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.338 [2024-12-10 11:30:30.416077] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 470.514 ms, result 0 00:22:08.338 true 00:22:08.338 11:30:30 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77107 00:22:08.338 11:30:30 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77107 ']' 00:22:08.338 11:30:30 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77107 00:22:08.338 11:30:30 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:22:08.338 11:30:30 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.338 11:30:30 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77107 00:22:08.338 killing process with pid 77107 00:22:08.338 11:30:30 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:08.338 11:30:30 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:08.338 11:30:30 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77107' 00:22:08.338 11:30:30 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77107 00:22:08.338 11:30:30 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77107 00:22:13.632 11:30:34 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:22:13.632 11:30:34 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:13.632 11:30:34 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:22:13.632 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.632 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:13.632 11:30:34 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:13.632 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:13.632 11:30:34 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:13.632 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:13.633 11:30:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:13.633 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:22:13.633 fio-3.35 00:22:13.633 Starting 1 thread 00:22:18.939 00:22:18.939 test: (groupid=0, jobs=1): err= 0: pid=77320: Tue Dec 10 11:30:40 2024 00:22:18.939 read: IOPS=951, BW=63.2MiB/s (66.2MB/s)(255MiB/4030msec) 00:22:18.939 slat (usec): min=5, max=156, avg= 7.26, stdev= 3.67 00:22:18.939 clat (usec): min=326, max=773, avg=467.79, stdev=53.03 00:22:18.939 lat (usec): min=332, max=780, avg=475.05, stdev=53.72 00:22:18.939 clat percentiles (usec): 00:22:18.939 | 1.00th=[ 363], 5.00th=[ 379], 10.00th=[ 412], 20.00th=[ 437], 00:22:18.939 | 30.00th=[ 441], 40.00th=[ 449], 50.00th=[ 457], 60.00th=[ 469], 00:22:18.939 | 70.00th=[ 490], 80.00th=[ 515], 90.00th=[ 537], 95.00th=[ 562], 00:22:18.939 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 725], 99.95th=[ 750], 00:22:18.939 | 99.99th=[ 775] 00:22:18.939 write: IOPS=958, BW=63.6MiB/s (66.7MB/s)(256MiB/4025msec); 0 zone resets 00:22:18.939 slat (nsec): min=20120, max=92457, avg=24185.56, stdev=4729.45 00:22:18.939 clat (usec): min=362, max=2516, avg=534.82, stdev=70.97 00:22:18.939 lat (usec): min=398, max=2540, avg=559.00, stdev=71.27 00:22:18.939 clat percentiles (usec): 00:22:18.939 | 1.00th=[ 408], 5.00th=[ 453], 10.00th=[ 465], 20.00th=[ 478], 00:22:18.939 | 30.00th=[ 498], 40.00th=[ 523], 50.00th=[ 537], 60.00th=[ 545], 00:22:18.939 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 611], 95.00th=[ 635], 00:22:18.939 | 99.00th=[ 775], 99.50th=[ 824], 99.90th=[ 889], 99.95th=[ 1237], 00:22:18.939 | 99.99th=[ 2507] 00:22:18.939 bw ( KiB/s): min=63512, max=68407, per=100.00%, avg=65160.88, stdev=1423.16, samples=8 00:22:18.939 iops : min= 934, max= 1005, avg=958.12, stdev=20.61, samples=8 00:22:18.939 lat (usec) : 500=52.01%, 750=47.33%, 1000=0.64% 00:22:18.939 lat (msec) 
: 2=0.01%, 4=0.01% 00:22:18.939 cpu : usr=98.66%, sys=0.37%, ctx=11, majf=0, minf=1167 00:22:18.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:18.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.939 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:18.939 00:22:18.939 Run status group 0 (all jobs): 00:22:18.939 READ: bw=63.2MiB/s (66.2MB/s), 63.2MiB/s-63.2MiB/s (66.2MB/s-66.2MB/s), io=255MiB (267MB), run=4030-4030msec 00:22:18.939 WRITE: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=256MiB (269MB), run=4025-4025msec 00:22:19.872 ----------------------------------------------------- 00:22:19.872 Suppressions used: 00:22:19.872 count bytes template 00:22:19.872 1 5 /usr/src/fio/parse.c 00:22:19.872 1 8 libtcmalloc_minimal.so 00:22:19.872 1 904 libcrypto.so 00:22:19.872 ----------------------------------------------------- 00:22:19.872 00:22:19.872 11:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:22:19.872 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.872 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:19.872 11:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:19.872 11:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:22:19.872 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.872 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:19.872 11:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:19.873 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:19.873 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:19.873 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:19.873 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:19.873 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:19.873 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:19.873 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:19.873 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:19.873 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:19.873 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:19.873 11:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:19.873 11:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:19.873 11:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:19.873 11:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:19.873 11:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:19.873 11:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:20.131 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:20.131 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:20.131 fio-3.35 00:22:20.131 Starting 2 threads 00:22:52.197 00:22:52.197 first_half: (groupid=0, jobs=1): err= 0: pid=77423: Tue Dec 10 11:31:12 2024 00:22:52.197 read: IOPS=2283, BW=9134KiB/s (9353kB/s)(255MiB/28636msec) 00:22:52.197 slat (usec): min=4, max=382, avg= 7.38, stdev= 3.52 00:22:52.197 clat (usec): min=1006, max=318463, avg=42458.08, stdev=22230.38 00:22:52.197 lat (usec): min=1015, max=318472, avg=42465.46, stdev=22230.56 00:22:52.197 clat percentiles (msec): 00:22:52.197 | 1.00th=[ 11], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 39], 00:22:52.197 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 40], 00:22:52.197 | 70.00th=[ 41], 80.00th=[ 42], 90.00th=[ 46], 95.00th=[ 52], 00:22:52.197 | 99.00th=[ 176], 99.50th=[ 194], 99.90th=[ 245], 99.95th=[ 266], 00:22:52.197 | 99.99th=[ 313] 00:22:52.197 write: IOPS=2389, BW=9557KiB/s (9787kB/s)(256MiB/27429msec); 0 zone resets 00:22:52.197 slat (usec): min=5, max=1277, avg= 9.26, stdev=11.03 00:22:52.197 clat (usec): min=438, max=114952, avg=13519.89, stdev=22980.48 00:22:52.197 lat (usec): min=453, max=114962, avg=13529.16, stdev=22980.72 00:22:52.197 clat percentiles (usec): 00:22:52.197 | 1.00th=[ 955], 5.00th=[ 1254], 10.00th=[ 1467], 20.00th=[ 1975], 00:22:52.197 | 30.00th=[ 3654], 40.00th=[ 5407], 50.00th=[ 6456], 60.00th=[ 7308], 00:22:52.197 | 70.00th=[ 8717], 80.00th=[ 13042], 90.00th=[ 30278], 95.00th=[ 84411], 00:22:52.197 | 99.00th=[ 98042], 99.50th=[104334], 99.90th=[107480], 99.95th=[110625], 00:22:52.197 | 99.99th=[112722] 00:22:52.197 bw ( KiB/s): min= 120, max=40328, per=88.48%, avg=16912.52, stdev=12056.22, samples=31 00:22:52.197 iops : min= 30, max=10082, avg=4228.13, stdev=3014.05, samples=31 00:22:52.197 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.64% 00:22:52.197 lat (msec) : 2=9.60%, 4=6.27%, 10=21.28%, 20=8.79%, 50=46.74% 00:22:52.197 lat (msec) : 100=4.93%, 250=1.66%, 500=0.04% 00:22:52.197 cpu : usr=98.49%, sys=0.44%, ctx=63, majf=0, minf=5568 00:22:52.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:52.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.197 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:52.197 issued rwts: total=65389,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:52.197 second_half: (groupid=0, jobs=1): err= 0: pid=77424: Tue Dec 10 11:31:12 2024 00:22:52.197 read: IOPS=2293, BW=9173KiB/s (9393kB/s)(255MiB/28453msec) 00:22:52.197 slat (nsec): min=4707, max=46441, avg=7220.05, stdev=1648.06 00:22:52.197 clat (usec): min=889, max=332271, avg=43225.20, stdev=21138.27 00:22:52.197 lat (usec): min=897, max=332279, avg=43232.42, stdev=21138.40 00:22:52.197 clat percentiles (msec): 00:22:52.197 | 1.00th=[ 8], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 39], 00:22:52.197 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 40], 00:22:52.197 | 70.00th=[ 41], 80.00th=[ 42], 90.00th=[ 46], 95.00th=[ 53], 00:22:52.197 | 
99.00th=[ 161], 99.50th=[ 188], 99.90th=[ 239], 99.95th=[ 257], 00:22:52.197 | 99.99th=[ 317] 00:22:52.197 write: IOPS=2800, BW=10.9MiB/s (11.5MB/s)(256MiB/23400msec); 0 zone resets 00:22:52.197 slat (usec): min=5, max=645, avg= 9.11, stdev= 5.88 00:22:52.197 clat (usec): min=463, max=112396, avg=12509.58, stdev=22407.91 00:22:52.197 lat (usec): min=487, max=112404, avg=12518.69, stdev=22408.01 00:22:52.197 clat percentiles (usec): 00:22:52.197 | 1.00th=[ 1037], 5.00th=[ 1319], 10.00th=[ 1483], 20.00th=[ 1795], 00:22:52.197 | 30.00th=[ 2442], 40.00th=[ 4555], 50.00th=[ 5932], 60.00th=[ 6980], 00:22:52.197 | 70.00th=[ 8225], 80.00th=[ 12518], 90.00th=[ 16188], 95.00th=[ 83362], 00:22:52.197 | 99.00th=[ 96994], 99.50th=[102237], 99.90th=[107480], 99.95th=[108528], 00:22:52.197 | 99.99th=[111674] 00:22:52.197 bw ( KiB/s): min= 216, max=42160, per=100.00%, avg=20974.12, stdev=13452.59, samples=25 00:22:52.197 iops : min= 54, max=10540, avg=5243.52, stdev=3363.14, samples=25 00:22:52.197 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.35% 00:22:52.197 lat (msec) : 2=12.34%, 4=6.15%, 10=19.35%, 20=8.52%, 50=46.40% 00:22:52.197 lat (msec) : 100=5.05%, 250=1.78%, 500=0.03% 00:22:52.197 cpu : usr=99.17%, sys=0.16%, ctx=58, majf=0, minf=5545 00:22:52.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:52.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.197 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:52.197 issued rwts: total=65251,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:52.197 00:22:52.197 Run status group 0 (all jobs): 00:22:52.197 READ: bw=17.8MiB/s (18.7MB/s), 9134KiB/s-9173KiB/s (9353kB/s-9393kB/s), io=510MiB (535MB), run=28453-28636msec 00:22:52.197 WRITE: bw=18.7MiB/s (19.6MB/s), 9557KiB/s-10.9MiB/s (9787kB/s-11.5MB/s), io=512MiB (537MB), run=23400-27429msec 00:22:52.456 ----------------------------------------------------- 00:22:52.456 Suppressions used: 00:22:52.456 count bytes template 00:22:52.456 2 10 /usr/src/fio/parse.c 00:22:52.456 3 288 /usr/src/fio/iolog.c 00:22:52.456 1 8 libtcmalloc_minimal.so 00:22:52.456 1 904 libcrypto.so 00:22:52.456 ----------------------------------------------------- 00:22:52.456 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:52.456 
11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:52.456 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:52.715 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:52.715 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:52.715 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:52.715 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:52.715 11:31:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:52.715 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:52.715 fio-3.35 00:22:52.715 Starting 1 thread 00:23:10.800 00:23:10.800 test: (groupid=0, jobs=1): err= 0: pid=77783: Tue Dec 10 11:31:32 2024 00:23:10.800 read: IOPS=6335, BW=24.7MiB/s (25.9MB/s)(255MiB/10292msec) 00:23:10.800 slat (nsec): min=4727, max=49560, avg=7031.76, stdev=1788.16 00:23:10.800 clat (usec): min=775, max=39088, avg=20192.39, stdev=1273.73 00:23:10.800 lat (usec): min=780, max=39096, avg=20199.43, stdev=1273.71 00:23:10.800 clat percentiles (usec): 00:23:10.800 | 1.00th=[19006], 5.00th=[19268], 10.00th=[19530], 20.00th=[19530], 00:23:10.800 | 30.00th=[19792], 40.00th=[19792], 50.00th=[20055], 60.00th=[20055], 00:23:10.800 | 70.00th=[20317], 80.00th=[20317], 90.00th=[20841], 95.00th=[22676], 00:23:10.800 | 99.00th=[25035], 99.50th=[27395], 99.90th=[29754], 99.95th=[34341], 00:23:10.800 | 99.99th=[38536] 00:23:10.800 write: IOPS=11.4k, BW=44.5MiB/s (46.6MB/s)(256MiB/5758msec); 0 zone resets 00:23:10.800 slat (usec): min=6, max=1577, avg= 9.63, stdev= 8.83 00:23:10.800 clat (usec): min=695, max=69836, avg=11178.50, stdev=14303.53 00:23:10.800 lat (usec): min=705, max=69844, avg=11188.13, stdev=14303.58 00:23:10.800 clat percentiles (usec): 00:23:10.800 | 1.00th=[ 979], 5.00th=[ 1188], 10.00th=[ 1319], 20.00th=[ 1516], 00:23:10.800 | 30.00th=[ 1729], 40.00th=[ 2311], 50.00th=[ 6980], 60.00th=[ 8029], 00:23:10.800 | 70.00th=[ 9503], 80.00th=[11600], 90.00th=[41681], 95.00th=[44303], 00:23:10.800 | 99.00th=[48497], 99.50th=[50070], 99.90th=[58983], 99.95th=[61080], 00:23:10.800 | 99.99th=[64750] 00:23:10.800 bw ( KiB/s): min=18440, max=65848, per=95.97%, avg=43690.67, stdev=12704.17, samples=12 00:23:10.800 iops : min= 4610, max=16462, avg=10922.67, stdev=3176.04, samples=12 00:23:10.800 lat (usec) : 750=0.01%, 1000=0.63% 00:23:10.800 lat (msec) : 2=18.07%, 4=2.23%, 10=15.69%, 20=33.47%, 50=29.65% 00:23:10.800 lat (msec) : 100=0.28% 00:23:10.800 cpu : usr=98.87%, sys=0.32%, ctx=29, majf=0, 
minf=5563 00:23:10.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:10.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.800 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:10.800 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:10.800 00:23:10.800 Run status group 0 (all jobs): 00:23:10.800 READ: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=255MiB (267MB), run=10292-10292msec 00:23:10.800 WRITE: bw=44.5MiB/s (46.6MB/s), 44.5MiB/s-44.5MiB/s (46.6MB/s-46.6MB/s), io=256MiB (268MB), run=5758-5758msec 00:23:11.741 ----------------------------------------------------- 00:23:11.741 Suppressions used: 00:23:11.741 count bytes template 00:23:11.741 1 5 /usr/src/fio/parse.c 00:23:11.741 2 192 /usr/src/fio/iolog.c 00:23:11.741 1 8 libtcmalloc_minimal.so 00:23:11.741 1 904 libcrypto.so 00:23:11.741 ----------------------------------------------------- 00:23:11.741 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:12.037 Remove shared memory files 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58299 /dev/shm/spdk_tgt_trace.pid76031 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:23:12.037 ************************************ 00:23:12.037 END TEST ftl_fio_basic 00:23:12.037 ************************************ 00:23:12.037 00:23:12.037 real 1m13.927s 00:23:12.037 user 2m44.401s 00:23:12.037 sys 0m3.715s 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.037 11:31:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:12.037 11:31:34 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:23:12.037 11:31:34 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:12.037 11:31:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.037 11:31:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:12.037 ************************************ 00:23:12.037 START TEST ftl_bdevperf 00:23:12.037 ************************************ 00:23:12.037 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:23:12.037 * Looking for test storage... 
00:23:12.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:12.037 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:12.037 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:12.037 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.296 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:12.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.297 --rc genhtml_branch_coverage=1 00:23:12.297 --rc genhtml_function_coverage=1 00:23:12.297 --rc genhtml_legend=1 00:23:12.297 --rc geninfo_all_blocks=1 00:23:12.297 --rc geninfo_unexecuted_blocks=1 00:23:12.297 00:23:12.297 ' 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:12.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.297 --rc genhtml_branch_coverage=1 00:23:12.297 
--rc genhtml_function_coverage=1 00:23:12.297 --rc genhtml_legend=1 00:23:12.297 --rc geninfo_all_blocks=1 00:23:12.297 --rc geninfo_unexecuted_blocks=1 00:23:12.297 00:23:12.297 ' 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:12.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.297 --rc genhtml_branch_coverage=1 00:23:12.297 --rc genhtml_function_coverage=1 00:23:12.297 --rc genhtml_legend=1 00:23:12.297 --rc geninfo_all_blocks=1 00:23:12.297 --rc geninfo_unexecuted_blocks=1 00:23:12.297 00:23:12.297 ' 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:12.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.297 --rc genhtml_branch_coverage=1 00:23:12.297 --rc genhtml_function_coverage=1 00:23:12.297 --rc genhtml_legend=1 00:23:12.297 --rc geninfo_all_blocks=1 00:23:12.297 --rc geninfo_unexecuted_blocks=1 00:23:12.297 00:23:12.297 ' 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78042 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78042 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78042 ']' 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.297 11:31:34 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:12.297 [2024-12-10 11:31:34.344498] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
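The launch pattern visible just above: bdevperf is started idle with -z (hold until told to run over RPC) and -T ftl0 (exercise only the ftl0 bdev), and waitforlisten then blocks until the new process answers on the RPC socket. A minimal sketch of the same sequence outside the harness, using the paths from this log; the final perform_tests trigger is the usual standalone step and is an assumption here, not something this transcript shows:

  # Start bdevperf idle, restricted to the ftl0 bdev, and note its pid.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  # waitforlisten (autotest_common.sh) polls /var/tmp/spdk.sock until the app responds;
  # the ftl0 bdev is then configured via scripts/rpc.py before any I/O is issued.
  # Standalone runs are typically kicked off afterwards with (assumed, not shown here):
  #   examples/bdev/bdevperf/bdevperf.py -t 240 perform_tests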
00:23:12.297 [2024-12-10 11:31:34.345132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78042 ] 00:23:12.556 [2024-12-10 11:31:34.528900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.556 [2024-12-10 11:31:34.657065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.492 11:31:35 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.492 11:31:35 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:23:13.492 11:31:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:13.492 11:31:35 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:23:13.492 11:31:35 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:13.492 11:31:35 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:23:13.492 11:31:35 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:23:13.492 11:31:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:13.750 11:31:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:13.750 11:31:35 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:23:13.750 11:31:35 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:13.750 11:31:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:13.750 11:31:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:13.750 11:31:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:13.751 11:31:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:13.751 11:31:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:14.009 11:31:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:14.009 { 00:23:14.009 "name": "nvme0n1", 00:23:14.009 "aliases": [ 00:23:14.009 "ddb79058-b7c4-47a0-baa0-4d538c5f0f3f" 00:23:14.009 ], 00:23:14.009 "product_name": "NVMe disk", 00:23:14.009 "block_size": 4096, 00:23:14.009 "num_blocks": 1310720, 00:23:14.009 "uuid": "ddb79058-b7c4-47a0-baa0-4d538c5f0f3f", 00:23:14.009 "numa_id": -1, 00:23:14.009 "assigned_rate_limits": { 00:23:14.009 "rw_ios_per_sec": 0, 00:23:14.009 "rw_mbytes_per_sec": 0, 00:23:14.009 "r_mbytes_per_sec": 0, 00:23:14.009 "w_mbytes_per_sec": 0 00:23:14.009 }, 00:23:14.009 "claimed": true, 00:23:14.009 "claim_type": "read_many_write_one", 00:23:14.009 "zoned": false, 00:23:14.009 "supported_io_types": { 00:23:14.009 "read": true, 00:23:14.009 "write": true, 00:23:14.009 "unmap": true, 00:23:14.009 "flush": true, 00:23:14.009 "reset": true, 00:23:14.009 "nvme_admin": true, 00:23:14.009 "nvme_io": true, 00:23:14.009 "nvme_io_md": false, 00:23:14.009 "write_zeroes": true, 00:23:14.009 "zcopy": false, 00:23:14.009 "get_zone_info": false, 00:23:14.009 "zone_management": false, 00:23:14.009 "zone_append": false, 00:23:14.009 "compare": true, 00:23:14.009 "compare_and_write": false, 00:23:14.009 "abort": true, 00:23:14.009 "seek_hole": false, 00:23:14.009 "seek_data": false, 00:23:14.009 "copy": true, 00:23:14.009 "nvme_iov_md": false 00:23:14.009 }, 00:23:14.009 "driver_specific": { 00:23:14.009 
"nvme": [ 00:23:14.009 { 00:23:14.009 "pci_address": "0000:00:11.0", 00:23:14.009 "trid": { 00:23:14.009 "trtype": "PCIe", 00:23:14.009 "traddr": "0000:00:11.0" 00:23:14.009 }, 00:23:14.009 "ctrlr_data": { 00:23:14.009 "cntlid": 0, 00:23:14.009 "vendor_id": "0x1b36", 00:23:14.009 "model_number": "QEMU NVMe Ctrl", 00:23:14.009 "serial_number": "12341", 00:23:14.009 "firmware_revision": "8.0.0", 00:23:14.009 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:14.009 "oacs": { 00:23:14.009 "security": 0, 00:23:14.009 "format": 1, 00:23:14.009 "firmware": 0, 00:23:14.010 "ns_manage": 1 00:23:14.010 }, 00:23:14.010 "multi_ctrlr": false, 00:23:14.010 "ana_reporting": false 00:23:14.010 }, 00:23:14.010 "vs": { 00:23:14.010 "nvme_version": "1.4" 00:23:14.010 }, 00:23:14.010 "ns_data": { 00:23:14.010 "id": 1, 00:23:14.010 "can_share": false 00:23:14.010 } 00:23:14.010 } 00:23:14.010 ], 00:23:14.010 "mp_policy": "active_passive" 00:23:14.010 } 00:23:14.010 } 00:23:14.010 ]' 00:23:14.010 11:31:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:14.010 11:31:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:14.010 11:31:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:14.010 11:31:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:14.010 11:31:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:14.010 11:31:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:23:14.010 11:31:36 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:23:14.010 11:31:36 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:14.010 11:31:36 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:23:14.010 11:31:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:14.010 11:31:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:14.268 11:31:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=a359efec-c5e8-43b7-bb94-d3b4d5c070f2 00:23:14.268 11:31:36 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:23:14.268 11:31:36 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a359efec-c5e8-43b7-bb94-d3b4d5c070f2 00:23:14.527 11:31:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:14.786 11:31:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=78d86d0c-ec35-45ab-b156-137ac86a5a90 00:23:14.786 11:31:36 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 78d86d0c-ec35-45ab-b156-137ac86a5a90 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=24b1e3b6-a01b-4494-9fbf-56d26c8b8867 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 24b1e3b6-a01b-4494-9fbf-56d26c8b8867 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=24b1e3b6-a01b-4494-9fbf-56d26c8b8867 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 24b1e3b6-a01b-4494-9fbf-56d26c8b8867 00:23:15.357 11:31:37 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=24b1e3b6-a01b-4494-9fbf-56d26c8b8867 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24b1e3b6-a01b-4494-9fbf-56d26c8b8867 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:15.357 { 00:23:15.357 "name": "24b1e3b6-a01b-4494-9fbf-56d26c8b8867", 00:23:15.357 "aliases": [ 00:23:15.357 "lvs/nvme0n1p0" 00:23:15.357 ], 00:23:15.357 "product_name": "Logical Volume", 00:23:15.357 "block_size": 4096, 00:23:15.357 "num_blocks": 26476544, 00:23:15.357 "uuid": "24b1e3b6-a01b-4494-9fbf-56d26c8b8867", 00:23:15.357 "assigned_rate_limits": { 00:23:15.357 "rw_ios_per_sec": 0, 00:23:15.357 "rw_mbytes_per_sec": 0, 00:23:15.357 "r_mbytes_per_sec": 0, 00:23:15.357 "w_mbytes_per_sec": 0 00:23:15.357 }, 00:23:15.357 "claimed": false, 00:23:15.357 "zoned": false, 00:23:15.357 "supported_io_types": { 00:23:15.357 "read": true, 00:23:15.357 "write": true, 00:23:15.357 "unmap": true, 00:23:15.357 "flush": false, 00:23:15.357 "reset": true, 00:23:15.357 "nvme_admin": false, 00:23:15.357 "nvme_io": false, 00:23:15.357 "nvme_io_md": false, 00:23:15.357 "write_zeroes": true, 00:23:15.357 "zcopy": false, 00:23:15.357 "get_zone_info": false, 00:23:15.357 "zone_management": false, 00:23:15.357 "zone_append": false, 00:23:15.357 "compare": false, 00:23:15.357 "compare_and_write": false, 00:23:15.357 "abort": false, 00:23:15.357 "seek_hole": true, 00:23:15.357 "seek_data": true, 00:23:15.357 "copy": false, 00:23:15.357 "nvme_iov_md": false 00:23:15.357 }, 00:23:15.357 "driver_specific": { 00:23:15.357 "lvol": { 00:23:15.357 "lvol_store_uuid": "78d86d0c-ec35-45ab-b156-137ac86a5a90", 00:23:15.357 "base_bdev": "nvme0n1", 00:23:15.357 "thin_provision": true, 00:23:15.357 "num_allocated_clusters": 0, 00:23:15.357 "snapshot": false, 00:23:15.357 "clone": false, 00:23:15.357 "esnap_clone": false 00:23:15.357 } 00:23:15.357 } 00:23:15.357 } 00:23:15.357 ]' 00:23:15.357 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:15.616 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:15.616 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:15.616 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:15.616 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:15.616 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:15.616 11:31:37 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:23:15.616 11:31:37 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:23:15.616 11:31:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:15.875 11:31:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:15.875 11:31:37 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:15.875 11:31:37 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 24b1e3b6-a01b-4494-9fbf-56d26c8b8867 00:23:15.875 11:31:37 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=24b1e3b6-a01b-4494-9fbf-56d26c8b8867 00:23:15.875 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:15.875 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:15.875 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:15.875 11:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24b1e3b6-a01b-4494-9fbf-56d26c8b8867 00:23:16.133 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:16.133 { 00:23:16.133 "name": "24b1e3b6-a01b-4494-9fbf-56d26c8b8867", 00:23:16.133 "aliases": [ 00:23:16.133 "lvs/nvme0n1p0" 00:23:16.133 ], 00:23:16.133 "product_name": "Logical Volume", 00:23:16.133 "block_size": 4096, 00:23:16.133 "num_blocks": 26476544, 00:23:16.133 "uuid": "24b1e3b6-a01b-4494-9fbf-56d26c8b8867", 00:23:16.133 "assigned_rate_limits": { 00:23:16.133 "rw_ios_per_sec": 0, 00:23:16.133 "rw_mbytes_per_sec": 0, 00:23:16.133 "r_mbytes_per_sec": 0, 00:23:16.133 "w_mbytes_per_sec": 0 00:23:16.133 }, 00:23:16.133 "claimed": false, 00:23:16.133 "zoned": false, 00:23:16.133 "supported_io_types": { 00:23:16.133 "read": true, 00:23:16.133 "write": true, 00:23:16.133 "unmap": true, 00:23:16.133 "flush": false, 00:23:16.133 "reset": true, 00:23:16.133 "nvme_admin": false, 00:23:16.133 "nvme_io": false, 00:23:16.133 "nvme_io_md": false, 00:23:16.133 "write_zeroes": true, 00:23:16.133 "zcopy": false, 00:23:16.133 "get_zone_info": false, 00:23:16.133 "zone_management": false, 00:23:16.133 "zone_append": false, 00:23:16.133 "compare": false, 00:23:16.133 "compare_and_write": false, 00:23:16.133 "abort": false, 00:23:16.133 "seek_hole": true, 00:23:16.133 "seek_data": true, 00:23:16.133 "copy": false, 00:23:16.133 "nvme_iov_md": false 00:23:16.133 }, 00:23:16.133 "driver_specific": { 00:23:16.133 "lvol": { 00:23:16.133 "lvol_store_uuid": "78d86d0c-ec35-45ab-b156-137ac86a5a90", 00:23:16.133 "base_bdev": "nvme0n1", 00:23:16.133 "thin_provision": true, 00:23:16.133 "num_allocated_clusters": 0, 00:23:16.133 "snapshot": false, 00:23:16.133 "clone": false, 00:23:16.133 "esnap_clone": false 00:23:16.133 } 00:23:16.133 } 00:23:16.133 } 00:23:16.133 ]' 00:23:16.133 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:16.133 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:16.133 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:16.133 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:16.133 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:16.133 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:16.133 11:31:38 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:23:16.133 11:31:38 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:16.391 11:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:23:16.391 11:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 24b1e3b6-a01b-4494-9fbf-56d26c8b8867 00:23:16.391 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=24b1e3b6-a01b-4494-9fbf-56d26c8b8867 00:23:16.391 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:16.391 11:31:38 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:23:16.391 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:16.391 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24b1e3b6-a01b-4494-9fbf-56d26c8b8867 00:23:16.958 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:16.958 { 00:23:16.958 "name": "24b1e3b6-a01b-4494-9fbf-56d26c8b8867", 00:23:16.958 "aliases": [ 00:23:16.958 "lvs/nvme0n1p0" 00:23:16.958 ], 00:23:16.958 "product_name": "Logical Volume", 00:23:16.958 "block_size": 4096, 00:23:16.958 "num_blocks": 26476544, 00:23:16.958 "uuid": "24b1e3b6-a01b-4494-9fbf-56d26c8b8867", 00:23:16.958 "assigned_rate_limits": { 00:23:16.958 "rw_ios_per_sec": 0, 00:23:16.958 "rw_mbytes_per_sec": 0, 00:23:16.958 "r_mbytes_per_sec": 0, 00:23:16.958 "w_mbytes_per_sec": 0 00:23:16.958 }, 00:23:16.958 "claimed": false, 00:23:16.958 "zoned": false, 00:23:16.958 "supported_io_types": { 00:23:16.958 "read": true, 00:23:16.958 "write": true, 00:23:16.958 "unmap": true, 00:23:16.958 "flush": false, 00:23:16.958 "reset": true, 00:23:16.958 "nvme_admin": false, 00:23:16.958 "nvme_io": false, 00:23:16.958 "nvme_io_md": false, 00:23:16.958 "write_zeroes": true, 00:23:16.958 "zcopy": false, 00:23:16.958 "get_zone_info": false, 00:23:16.958 "zone_management": false, 00:23:16.958 "zone_append": false, 00:23:16.958 "compare": false, 00:23:16.958 "compare_and_write": false, 00:23:16.958 "abort": false, 00:23:16.958 "seek_hole": true, 00:23:16.958 "seek_data": true, 00:23:16.958 "copy": false, 00:23:16.958 "nvme_iov_md": false 00:23:16.958 }, 00:23:16.958 "driver_specific": { 00:23:16.958 "lvol": { 00:23:16.958 "lvol_store_uuid": "78d86d0c-ec35-45ab-b156-137ac86a5a90", 00:23:16.958 "base_bdev": "nvme0n1", 00:23:16.958 "thin_provision": true, 00:23:16.958 "num_allocated_clusters": 0, 00:23:16.958 "snapshot": false, 00:23:16.958 "clone": false, 00:23:16.958 "esnap_clone": false 00:23:16.958 } 00:23:16.958 } 00:23:16.958 } 00:23:16.958 ]' 00:23:16.958 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:16.958 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:16.958 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:16.958 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:16.958 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:16.958 11:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:16.958 11:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:23:16.958 11:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 24b1e3b6-a01b-4494-9fbf-56d26c8b8867 -c nvc0n1p0 --l2p_dram_limit 20 00:23:17.217 [2024-12-10 11:31:39.157093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.217 [2024-12-10 11:31:39.157159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:17.217 [2024-12-10 11:31:39.157196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:17.217 [2024-12-10 11:31:39.157210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.217 [2024-12-10 11:31:39.157286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.217 [2024-12-10 11:31:39.157305] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:17.217 [2024-12-10 11:31:39.157318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:17.217 [2024-12-10 11:31:39.157330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.217 [2024-12-10 11:31:39.157355] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:17.217 [2024-12-10 11:31:39.158374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:17.217 [2024-12-10 11:31:39.158562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.217 [2024-12-10 11:31:39.158588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:17.217 [2024-12-10 11:31:39.158603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:23:17.217 [2024-12-10 11:31:39.158617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.217 [2024-12-10 11:31:39.158824] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9c35387a-d0c8-40ff-bc1d-62acd0c79a2a 00:23:17.217 [2024-12-10 11:31:39.159860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.217 [2024-12-10 11:31:39.159885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:17.217 [2024-12-10 11:31:39.159905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:17.217 [2024-12-10 11:31:39.159917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.217 [2024-12-10 11:31:39.164355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.217 [2024-12-10 11:31:39.164404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:17.217 [2024-12-10 11:31:39.164442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.375 ms 00:23:17.217 [2024-12-10 11:31:39.164453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.217 [2024-12-10 11:31:39.164570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.217 [2024-12-10 11:31:39.164589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:17.217 [2024-12-10 11:31:39.164608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:17.217 [2024-12-10 11:31:39.164619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.217 [2024-12-10 11:31:39.164940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.217 [2024-12-10 11:31:39.165005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:17.217 [2024-12-10 11:31:39.165149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:17.217 [2024-12-10 11:31:39.165173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.217 [2024-12-10 11:31:39.165216] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:17.217 [2024-12-10 11:31:39.169785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.217 [2024-12-10 11:31:39.169831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:17.217 [2024-12-10 11:31:39.169870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.582 ms 00:23:17.217 [2024-12-10 11:31:39.169889] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.217 [2024-12-10 11:31:39.169934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.217 [2024-12-10 11:31:39.169952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:17.217 [2024-12-10 11:31:39.169964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:17.217 [2024-12-10 11:31:39.169978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.217 [2024-12-10 11:31:39.170023] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:17.217 [2024-12-10 11:31:39.170209] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:17.217 [2024-12-10 11:31:39.170228] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:17.217 [2024-12-10 11:31:39.170245] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:17.217 [2024-12-10 11:31:39.170260] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:17.217 [2024-12-10 11:31:39.170274] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:17.217 [2024-12-10 11:31:39.170287] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:17.217 [2024-12-10 11:31:39.170300] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:17.217 [2024-12-10 11:31:39.170311] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:17.217 [2024-12-10 11:31:39.170328] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:17.217 [2024-12-10 11:31:39.170340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.217 [2024-12-10 11:31:39.170353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:17.217 [2024-12-10 11:31:39.170366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:23:17.217 [2024-12-10 11:31:39.170379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.217 [2024-12-10 11:31:39.170471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.217 [2024-12-10 11:31:39.170487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:17.217 [2024-12-10 11:31:39.170515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:17.218 [2024-12-10 11:31:39.170530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.218 [2024-12-10 11:31:39.170635] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:17.218 [2024-12-10 11:31:39.170653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:17.218 [2024-12-10 11:31:39.170665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:17.218 [2024-12-10 11:31:39.170704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.218 [2024-12-10 11:31:39.170719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:17.218 [2024-12-10 11:31:39.170733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:17.218 [2024-12-10 11:31:39.170744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:17.218 
[2024-12-10 11:31:39.170756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:17.218 [2024-12-10 11:31:39.170767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:17.218 [2024-12-10 11:31:39.170780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:17.218 [2024-12-10 11:31:39.170790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:17.218 [2024-12-10 11:31:39.170831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:17.218 [2024-12-10 11:31:39.170852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:17.218 [2024-12-10 11:31:39.170874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:17.218 [2024-12-10 11:31:39.170893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:17.218 [2024-12-10 11:31:39.170916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.218 [2024-12-10 11:31:39.170927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:17.218 [2024-12-10 11:31:39.170940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:17.218 [2024-12-10 11:31:39.170951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.218 [2024-12-10 11:31:39.170963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:17.218 [2024-12-10 11:31:39.170974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:17.218 [2024-12-10 11:31:39.170986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:17.218 [2024-12-10 11:31:39.170997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:17.218 [2024-12-10 11:31:39.171009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:17.218 [2024-12-10 11:31:39.171019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:17.218 [2024-12-10 11:31:39.171032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:17.218 [2024-12-10 11:31:39.171054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:17.218 [2024-12-10 11:31:39.171066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:17.218 [2024-12-10 11:31:39.171076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:17.218 [2024-12-10 11:31:39.171089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:17.218 [2024-12-10 11:31:39.171100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:17.218 [2024-12-10 11:31:39.171114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:17.218 [2024-12-10 11:31:39.171125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:17.218 [2024-12-10 11:31:39.171139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:17.218 [2024-12-10 11:31:39.171149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:17.218 [2024-12-10 11:31:39.171162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:17.218 [2024-12-10 11:31:39.171172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:17.218 [2024-12-10 11:31:39.171186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:17.218 [2024-12-10 11:31:39.171197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:23:17.218 [2024-12-10 11:31:39.171209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.218 [2024-12-10 11:31:39.171220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:17.218 [2024-12-10 11:31:39.171232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:17.218 [2024-12-10 11:31:39.171242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.218 [2024-12-10 11:31:39.171254] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:17.218 [2024-12-10 11:31:39.171266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:17.218 [2024-12-10 11:31:39.171279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:17.218 [2024-12-10 11:31:39.171290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.218 [2024-12-10 11:31:39.171306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:17.218 [2024-12-10 11:31:39.171317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:17.218 [2024-12-10 11:31:39.171329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:17.218 [2024-12-10 11:31:39.171340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:17.218 [2024-12-10 11:31:39.171352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:17.218 [2024-12-10 11:31:39.171363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:17.218 [2024-12-10 11:31:39.171378] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:17.218 [2024-12-10 11:31:39.171391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:17.218 [2024-12-10 11:31:39.171406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:17.218 [2024-12-10 11:31:39.171418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:17.218 [2024-12-10 11:31:39.171431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:17.218 [2024-12-10 11:31:39.171442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:17.218 [2024-12-10 11:31:39.171455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:17.218 [2024-12-10 11:31:39.171466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:17.218 [2024-12-10 11:31:39.171479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:17.218 [2024-12-10 11:31:39.171491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:17.218 [2024-12-10 11:31:39.171508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:17.218 [2024-12-10 11:31:39.171519] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:17.218 [2024-12-10 11:31:39.171533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:17.218 [2024-12-10 11:31:39.171545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:17.218 [2024-12-10 11:31:39.171558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:17.218 [2024-12-10 11:31:39.171569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:17.218 [2024-12-10 11:31:39.171583] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:17.218 [2024-12-10 11:31:39.171598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:17.218 [2024-12-10 11:31:39.171612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:17.218 [2024-12-10 11:31:39.171624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:17.218 [2024-12-10 11:31:39.171653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:17.218 [2024-12-10 11:31:39.171666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:17.218 [2024-12-10 11:31:39.171681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.218 [2024-12-10 11:31:39.171692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:17.218 [2024-12-10 11:31:39.171706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:23:17.218 [2024-12-10 11:31:39.171718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.218 [2024-12-10 11:31:39.171770] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
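The scrub notice above walks every NV cache chunk before the device becomes usable. With the layout figures already printed (5171.00 MiB NV cache capacity, 5 chunks) and the 2084.150 ms duration reported in the completed "Scrub NV cache" step just below, a quick back-of-the-envelope check gives the effective scrub rate. A minimal sketch, assuming the scrub touches the full NV cache data region (which the 5-chunk count suggests):

```bash
# Effective scrub throughput from the figures this log reports:
# 5171.00 MiB NV cache capacity (layout dump above) scrubbed in
# 2084.150 ms ("Scrub NV cache" step below).
awk 'BEGIN {
  mib = 5171.00        # NV cache device capacity, MiB
  ms  = 2084.150       # reported scrub duration, ms
  printf "effective scrub rate: %.1f MiB/s\n", mib / (ms / 1000)
}'
# -> effective scrub rate: ~2481.1 MiB/s
```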
00:23:17.218 [2024-12-10 11:31:39.171792] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:19.116 [2024-12-10 11:31:41.255911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.116 [2024-12-10 11:31:41.256134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:19.117 [2024-12-10 11:31:41.256175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2084.150 ms 00:23:19.117 [2024-12-10 11:31:41.256190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.375 [2024-12-10 11:31:41.289208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.375 [2024-12-10 11:31:41.289270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:19.375 [2024-12-10 11:31:41.289310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.753 ms 00:23:19.376 [2024-12-10 11:31:41.289322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.376 [2024-12-10 11:31:41.289539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.376 [2024-12-10 11:31:41.289569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:19.376 [2024-12-10 11:31:41.289588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:19.376 [2024-12-10 11:31:41.289600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.376 [2024-12-10 11:31:41.337621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.376 [2024-12-10 11:31:41.337692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:19.376 [2024-12-10 11:31:41.337737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.959 ms 00:23:19.376 [2024-12-10 11:31:41.337750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.376 [2024-12-10 11:31:41.337819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.376 [2024-12-10 11:31:41.337835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:19.376 [2024-12-10 11:31:41.337853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:19.376 [2024-12-10 11:31:41.337864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.376 [2024-12-10 11:31:41.338282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.376 [2024-12-10 11:31:41.338303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:19.376 [2024-12-10 11:31:41.338318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:23:19.376 [2024-12-10 11:31:41.338330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.376 [2024-12-10 11:31:41.338485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.376 [2024-12-10 11:31:41.338502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:19.376 [2024-12-10 11:31:41.338518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:23:19.376 [2024-12-10 11:31:41.338532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.376 [2024-12-10 11:31:41.355376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.376 [2024-12-10 11:31:41.355428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:19.376 [2024-12-10 
11:31:41.355480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.819 ms 00:23:19.376 [2024-12-10 11:31:41.355503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.376 [2024-12-10 11:31:41.368977] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:23:19.376 [2024-12-10 11:31:41.374005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.376 [2024-12-10 11:31:41.374242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:19.376 [2024-12-10 11:31:41.374274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.382 ms 00:23:19.376 [2024-12-10 11:31:41.374290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.376 [2024-12-10 11:31:41.433510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.376 [2024-12-10 11:31:41.433584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:19.376 [2024-12-10 11:31:41.433622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.150 ms 00:23:19.376 [2024-12-10 11:31:41.433638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.376 [2024-12-10 11:31:41.433888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.376 [2024-12-10 11:31:41.433914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:19.376 [2024-12-10 11:31:41.433931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:23:19.376 [2024-12-10 11:31:41.433944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.376 [2024-12-10 11:31:41.464649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.376 [2024-12-10 11:31:41.464885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:19.376 [2024-12-10 11:31:41.464915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.635 ms 00:23:19.376 [2024-12-10 11:31:41.464932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.376 [2024-12-10 11:31:41.494755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.376 [2024-12-10 11:31:41.494800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:19.376 [2024-12-10 11:31:41.494836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.774 ms 00:23:19.376 [2024-12-10 11:31:41.494849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.376 [2024-12-10 11:31:41.495584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.376 [2024-12-10 11:31:41.495613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:19.376 [2024-12-10 11:31:41.495645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:23:19.376 [2024-12-10 11:31:41.495664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.634 [2024-12-10 11:31:41.575667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.634 [2024-12-10 11:31:41.575954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:19.634 [2024-12-10 11:31:41.575987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.936 ms 00:23:19.634 [2024-12-10 11:31:41.576002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.634 [2024-12-10 
11:31:41.607141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.634 [2024-12-10 11:31:41.607195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:19.634 [2024-12-10 11:31:41.607231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.035 ms 00:23:19.634 [2024-12-10 11:31:41.607244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.634 [2024-12-10 11:31:41.637913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.634 [2024-12-10 11:31:41.637977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:19.634 [2024-12-10 11:31:41.637995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.625 ms 00:23:19.634 [2024-12-10 11:31:41.638009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.634 [2024-12-10 11:31:41.669050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.634 [2024-12-10 11:31:41.669144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:19.634 [2024-12-10 11:31:41.669163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.982 ms 00:23:19.634 [2024-12-10 11:31:41.669177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.634 [2024-12-10 11:31:41.669227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.634 [2024-12-10 11:31:41.669252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:19.634 [2024-12-10 11:31:41.669265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:19.634 [2024-12-10 11:31:41.669278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.634 [2024-12-10 11:31:41.669390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.634 [2024-12-10 11:31:41.669413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:19.635 [2024-12-10 11:31:41.669425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:19.635 [2024-12-10 11:31:41.669441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.635 [2024-12-10 11:31:41.670453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2512.907 ms, result 0 00:23:19.635 { 00:23:19.635 "name": "ftl0", 00:23:19.635 "uuid": "9c35387a-d0c8-40ff-bc1d-62acd0c79a2a" 00:23:19.635 } 00:23:19.635 11:31:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:23:19.635 11:31:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:23:19.635 11:31:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:23:19.893 11:31:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:23:20.152 [2024-12-10 11:31:42.159085] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:20.152 I/O size of 69632 is greater than zero copy threshold (65536). 00:23:20.152 Zero copy mechanism will not be used. 00:23:20.152 Running I/O for 4 seconds... 
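The two RPC lines and the bdevperf.py invocation above make up the entire measurement harness for this first run. A minimal standalone sketch of the same sequence, assuming an already-created ftl0 bdev with bdevperf running against it, and using SPDK_REPO as a hypothetical shorthand for the repository path shown in the log:

```bash
#!/usr/bin/env bash
# Sketch of the checks and the q=1 randwrite run captured above; all flags
# are taken verbatim from the log. SPDK_REPO is a hypothetical variable
# standing in for /home/vagrant/spdk_repo/spdk.
set -euo pipefail
SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}

# Confirm the FTL bdev answers stats RPCs before benchmarking it.
"$SPDK_REPO/scripts/rpc.py" bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0

# 4-second random-write run at queue depth 1 with 69632-byte IOs; 69632
# (68 KiB) exceeds the 65536-byte zero-copy threshold, hence the notice
# in the log that zero copy will not be used.
"$SPDK_REPO/examples/bdev/bdevperf/bdevperf.py" perform_tests \
  -q 1 -w randwrite -t 4 -o 69632
```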
00:23:22.096 1778.00 IOPS, 118.07 MiB/s [2024-12-10T11:31:45.197Z] 1805.50 IOPS, 119.90 MiB/s [2024-12-10T11:31:46.571Z] 1827.67 IOPS, 121.37 MiB/s [2024-12-10T11:31:46.571Z] 1824.75 IOPS, 121.17 MiB/s 00:23:24.404 Latency(us) 00:23:24.404 [2024-12-10T11:31:46.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.404 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:23:24.404 ftl0 : 4.00 1823.97 121.12 0.00 0.00 573.29 225.28 2517.18 00:23:24.404 [2024-12-10T11:31:46.571Z] =================================================================================================================== 00:23:24.404 [2024-12-10T11:31:46.571Z] Total : 1823.97 121.12 0.00 0.00 573.29 225.28 2517.18 00:23:24.404 [2024-12-10 11:31:46.171525] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:24.404 { 00:23:24.404 "results": [ 00:23:24.404 { 00:23:24.404 "job": "ftl0", 00:23:24.404 "core_mask": "0x1", 00:23:24.404 "workload": "randwrite", 00:23:24.405 "status": "finished", 00:23:24.405 "queue_depth": 1, 00:23:24.405 "io_size": 69632, 00:23:24.405 "runtime": 4.002813, 00:23:24.405 "iops": 1823.9672949997914, 00:23:24.405 "mibps": 121.1228281835799, 00:23:24.405 "io_failed": 0, 00:23:24.405 "io_timeout": 0, 00:23:24.405 "avg_latency_us": 573.2852734992716, 00:23:24.405 "min_latency_us": 225.28, 00:23:24.405 "max_latency_us": 2517.1781818181817 00:23:24.405 } 00:23:24.405 ], 00:23:24.405 "core_count": 1 00:23:24.405 } 00:23:24.405 11:31:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:23:24.405 [2024-12-10 11:31:46.338585] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:24.405 Running I/O for 4 seconds... 
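Each run ends by printing a JSON results block like the one above. A hedged sketch of pulling the headline numbers out of it with jq, using the field names shown in the log and assuming the block has been captured to a hypothetical results.json:

```bash
# Extract job name, IOPS, throughput, and average latency from the
# results JSON that bdevperf prints after each run.
jq -r '.results[] |
  "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' \
  results.json
# For the q=1 randwrite run above this prints roughly:
# ftl0: 1823.967... IOPS, 121.122... MiB/s, avg 573.285... us
```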
00:23:26.275 7887.00 IOPS, 30.81 MiB/s [2024-12-10T11:31:49.377Z] 7498.50 IOPS, 29.29 MiB/s [2024-12-10T11:31:50.750Z] 7186.67 IOPS, 28.07 MiB/s [2024-12-10T11:31:50.750Z] 7030.75 IOPS, 27.46 MiB/s 00:23:28.583 Latency(us) 00:23:28.583 [2024-12-10T11:31:50.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.583 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:23:28.583 ftl0 : 4.02 7019.74 27.42 0.00 0.00 18178.68 335.13 34078.72 00:23:28.583 [2024-12-10T11:31:50.750Z] =================================================================================================================== 00:23:28.583 [2024-12-10T11:31:50.750Z] Total : 7019.74 27.42 0.00 0.00 18178.68 0.00 34078.72 00:23:28.583 [2024-12-10 11:31:50.374454] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:28.583 { 00:23:28.583 "results": [ 00:23:28.583 { 00:23:28.583 "job": "ftl0", 00:23:28.583 "core_mask": "0x1", 00:23:28.583 "workload": "randwrite", 00:23:28.583 "status": "finished", 00:23:28.583 "queue_depth": 128, 00:23:28.583 "io_size": 4096, 00:23:28.583 "runtime": 4.024221, 00:23:28.583 "iops": 7019.743696978869, 00:23:28.583 "mibps": 27.420873816323706, 00:23:28.583 "io_failed": 0, 00:23:28.583 "io_timeout": 0, 00:23:28.583 "avg_latency_us": 18178.68318917162, 00:23:28.583 "min_latency_us": 335.1272727272727, 00:23:28.583 "max_latency_us": 34078.72 00:23:28.583 } 00:23:28.583 ], 00:23:28.583 "core_count": 1 00:23:28.583 } 00:23:28.583 11:31:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:23:28.583 [2024-12-10 11:31:50.548604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:28.583 Running I/O for 4 seconds... 
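The MiB/s column in these tables is derived directly from IOPS and IO size, so the JSON blocks can be cross-checked by hand: MiB/s = iops * io_size / 2^20. A one-line check against the q=128 randwrite run above:

```bash
# Reproduce the reported "mibps" from the "iops" and "io_size" fields
# of the q=128 randwrite results JSON above.
awk 'BEGIN {
  iops = 7019.743696978869   # "iops" from the results JSON
  io   = 4096                # "io_size" in bytes
  printf "%.6f MiB/s\n", iops * io / (1024 * 1024)
}'
# -> 27.420874, matching the reported mibps of 27.420873816...
```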
00:23:30.495 5861.00 IOPS, 22.89 MiB/s [2024-12-10T11:31:53.597Z] 5821.00 IOPS, 22.74 MiB/s [2024-12-10T11:31:54.972Z] 5804.00 IOPS, 22.67 MiB/s [2024-12-10T11:31:54.972Z] 5834.50 IOPS, 22.79 MiB/s 00:23:32.805 Latency(us) 00:23:32.805 [2024-12-10T11:31:54.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.805 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:32.805 Verification LBA range: start 0x0 length 0x1400000 00:23:32.805 ftl0 : 4.01 5844.99 22.83 0.00 0.00 21819.16 381.67 30027.40 00:23:32.805 [2024-12-10T11:31:54.972Z] =================================================================================================================== 00:23:32.805 [2024-12-10T11:31:54.972Z] Total : 5844.99 22.83 0.00 0.00 21819.16 0.00 30027.40 00:23:32.805 [2024-12-10 11:31:54.582180] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:32.805 { 00:23:32.805 "results": [ 00:23:32.805 { 00:23:32.805 "job": "ftl0", 00:23:32.805 "core_mask": "0x1", 00:23:32.805 "workload": "verify", 00:23:32.805 "status": "finished", 00:23:32.805 "verify_range": { 00:23:32.805 "start": 0, 00:23:32.805 "length": 20971520 00:23:32.805 }, 00:23:32.805 "queue_depth": 128, 00:23:32.805 "io_size": 4096, 00:23:32.805 "runtime": 4.014378, 00:23:32.805 "iops": 5844.9901827879685, 00:23:32.805 "mibps": 22.831992901515502, 00:23:32.805 "io_failed": 0, 00:23:32.805 "io_timeout": 0, 00:23:32.805 "avg_latency_us": 21819.163866968356, 00:23:32.805 "min_latency_us": 381.6727272727273, 00:23:32.805 "max_latency_us": 30027.403636363637 00:23:32.805 } 00:23:32.805 ], 00:23:32.805 "core_count": 1 00:23:32.805 } 00:23:32.805 11:31:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:23:32.805 [2024-12-10 11:31:54.883561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.805 [2024-12-10 11:31:54.883662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:32.805 [2024-12-10 11:31:54.883703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:32.805 [2024-12-10 11:31:54.883718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.805 [2024-12-10 11:31:54.883754] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:32.805 [2024-12-10 11:31:54.887127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.805 [2024-12-10 11:31:54.887159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:32.805 [2024-12-10 11:31:54.887193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.331 ms 00:23:32.805 [2024-12-10 11:31:54.887205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.805 [2024-12-10 11:31:54.888538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.805 [2024-12-10 11:31:54.888597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:32.805 [2024-12-10 11:31:54.888620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:23:32.805 [2024-12-10 11:31:54.888654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.064 [2024-12-10 11:31:55.069777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.064 [2024-12-10 11:31:55.069857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:23:33.064 [2024-12-10 11:31:55.069886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 181.079 ms 00:23:33.064 [2024-12-10 11:31:55.069899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.064 [2024-12-10 11:31:55.076661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.064 [2024-12-10 11:31:55.076711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:33.064 [2024-12-10 11:31:55.076751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.708 ms 00:23:33.065 [2024-12-10 11:31:55.076766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.065 [2024-12-10 11:31:55.108611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.065 [2024-12-10 11:31:55.108699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:33.065 [2024-12-10 11:31:55.108724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.740 ms 00:23:33.065 [2024-12-10 11:31:55.108737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.065 [2024-12-10 11:31:55.128217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.065 [2024-12-10 11:31:55.128269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:33.065 [2024-12-10 11:31:55.128308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.416 ms 00:23:33.065 [2024-12-10 11:31:55.128320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.065 [2024-12-10 11:31:55.128522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.065 [2024-12-10 11:31:55.128545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:33.065 [2024-12-10 11:31:55.128563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:23:33.065 [2024-12-10 11:31:55.128576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.065 [2024-12-10 11:31:55.160447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.065 [2024-12-10 11:31:55.160497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:33.065 [2024-12-10 11:31:55.160534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.842 ms 00:23:33.065 [2024-12-10 11:31:55.160547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.065 [2024-12-10 11:31:55.191532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.065 [2024-12-10 11:31:55.191581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:33.065 [2024-12-10 11:31:55.191619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.932 ms 00:23:33.065 [2024-12-10 11:31:55.191630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.065 [2024-12-10 11:31:55.222139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.065 [2024-12-10 11:31:55.222184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:33.065 [2024-12-10 11:31:55.222221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.428 ms 00:23:33.065 [2024-12-10 11:31:55.222232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.324 [2024-12-10 11:31:55.252795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.324 [2024-12-10 
11:31:55.252855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:33.324 [2024-12-10 11:31:55.252880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.448 ms 00:23:33.324 [2024-12-10 11:31:55.252892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.324 [2024-12-10 11:31:55.252941] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:33.324 [2024-12-10 11:31:55.252965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.252981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.252994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253970] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.253996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.254007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.254029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:33.324 [2024-12-10 11:31:55.254041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254301] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:33.325 [2024-12-10 11:31:55.254362] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:33.325 [2024-12-10 11:31:55.254379] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c35387a-d0c8-40ff-bc1d-62acd0c79a2a 00:23:33.325 [2024-12-10 11:31:55.254391] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:33.325 [2024-12-10 11:31:55.254404] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:33.325 [2024-12-10 11:31:55.254416] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:33.325 [2024-12-10 11:31:55.254430] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:33.325 [2024-12-10 11:31:55.254441] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:33.325 [2024-12-10 11:31:55.254454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:33.325 [2024-12-10 11:31:55.254466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:33.325 [2024-12-10 11:31:55.254480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:33.325 [2024-12-10 11:31:55.254490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:33.325 [2024-12-10 11:31:55.254504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.325 [2024-12-10 11:31:55.254515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:33.325 [2024-12-10 11:31:55.254530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.567 ms 00:23:33.325 [2024-12-10 11:31:55.254541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.325 [2024-12-10 11:31:55.270883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.325 [2024-12-10 11:31:55.270926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:33.325 [2024-12-10 11:31:55.270946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.270 ms 00:23:33.325 [2024-12-10 11:31:55.270958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.325 [2024-12-10 11:31:55.271411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.325 [2024-12-10 11:31:55.271432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:33.325 [2024-12-10 11:31:55.271448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:23:33.325 [2024-12-10 11:31:55.271462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.325 [2024-12-10 11:31:55.315861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.325 [2024-12-10 11:31:55.316115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:33.325 [2024-12-10 11:31:55.316155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.325 [2024-12-10 11:31:55.316170] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:33.325 [2024-12-10 11:31:55.316252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.325 [2024-12-10 11:31:55.316266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:33.325 [2024-12-10 11:31:55.316280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.325 [2024-12-10 11:31:55.316295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.325 [2024-12-10 11:31:55.316460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.325 [2024-12-10 11:31:55.316481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:33.325 [2024-12-10 11:31:55.316496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.325 [2024-12-10 11:31:55.316507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.325 [2024-12-10 11:31:55.316533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.325 [2024-12-10 11:31:55.316546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:33.325 [2024-12-10 11:31:55.316560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.325 [2024-12-10 11:31:55.316570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.325 [2024-12-10 11:31:55.417018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.325 [2024-12-10 11:31:55.417088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.325 [2024-12-10 11:31:55.417130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.325 [2024-12-10 11:31:55.417142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.584 [2024-12-10 11:31:55.500499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.584 [2024-12-10 11:31:55.500566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.584 [2024-12-10 11:31:55.500606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.584 [2024-12-10 11:31:55.500622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.584 [2024-12-10 11:31:55.500790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.584 [2024-12-10 11:31:55.500812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.584 [2024-12-10 11:31:55.500828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.584 [2024-12-10 11:31:55.500840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.584 [2024-12-10 11:31:55.500909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.584 [2024-12-10 11:31:55.500927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.584 [2024-12-10 11:31:55.500941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.584 [2024-12-10 11:31:55.500952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.584 [2024-12-10 11:31:55.501089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.584 [2024-12-10 11:31:55.501109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.584 [2024-12-10 11:31:55.501127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:23:33.584 [2024-12-10 11:31:55.501140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.584 [2024-12-10 11:31:55.501195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.584 [2024-12-10 11:31:55.501213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:33.584 [2024-12-10 11:31:55.501228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.584 [2024-12-10 11:31:55.501239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.584 [2024-12-10 11:31:55.501288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.584 [2024-12-10 11:31:55.501311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.584 [2024-12-10 11:31:55.501326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.584 [2024-12-10 11:31:55.501349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.584 [2024-12-10 11:31:55.501407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.584 [2024-12-10 11:31:55.501425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.584 [2024-12-10 11:31:55.501440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.584 [2024-12-10 11:31:55.501452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.584 [2024-12-10 11:31:55.501606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 617.999 ms, result 0 00:23:33.584 true 00:23:33.584 11:31:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78042 00:23:33.584 11:31:55 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78042 ']' 00:23:33.584 11:31:55 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78042 00:23:33.584 11:31:55 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:23:33.584 11:31:55 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.584 11:31:55 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78042 00:23:33.584 11:31:55 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.584 11:31:55 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.584 killing process with pid 78042 00:23:33.584 Received shutdown signal, test time was about 4.000000 seconds 00:23:33.584 00:23:33.584 Latency(us) 00:23:33.584 [2024-12-10T11:31:55.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.584 [2024-12-10T11:31:55.751Z] =================================================================================================================== 00:23:33.584 [2024-12-10T11:31:55.751Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.584 11:31:55 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78042' 00:23:33.584 11:31:55 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78042 00:23:33.584 11:31:55 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78042 00:23:36.866 Remove shared memory files 00:23:36.866 11:31:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:36.866 11:31:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:23:36.866 11:31:58 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:36.866 11:31:58 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:23:36.866 11:31:58 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:23:36.866 11:31:58 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:23:36.866 11:31:58 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:36.866 11:31:58 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:23:36.866 ************************************ 00:23:36.866 END TEST ftl_bdevperf 00:23:36.866 ************************************ 00:23:36.866 00:23:36.866 real 0m24.904s 00:23:36.866 user 0m28.919s 00:23:36.866 sys 0m1.062s 00:23:36.866 11:31:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.866 11:31:58 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:36.866 11:31:58 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:36.866 11:31:58 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:36.866 11:31:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.866 11:31:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:36.866 ************************************ 00:23:36.866 START TEST ftl_trim 00:23:36.866 ************************************ 00:23:36.866 11:31:58 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:37.125 * Looking for test storage... 00:23:37.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:37.125 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:37.125 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:23:37.125 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:37.125 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:37.125 11:31:59 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:23:37.125 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:37.125 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:37.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.125 --rc genhtml_branch_coverage=1 00:23:37.125 --rc genhtml_function_coverage=1 00:23:37.125 --rc genhtml_legend=1 00:23:37.125 --rc geninfo_all_blocks=1 00:23:37.125 --rc geninfo_unexecuted_blocks=1 00:23:37.125 00:23:37.125 ' 00:23:37.125 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:37.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.125 --rc genhtml_branch_coverage=1 00:23:37.125 --rc genhtml_function_coverage=1 00:23:37.125 --rc genhtml_legend=1 00:23:37.125 --rc geninfo_all_blocks=1 00:23:37.125 --rc geninfo_unexecuted_blocks=1 00:23:37.125 00:23:37.125 ' 00:23:37.125 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:37.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.125 --rc genhtml_branch_coverage=1 00:23:37.125 --rc genhtml_function_coverage=1 00:23:37.125 --rc genhtml_legend=1 00:23:37.125 --rc geninfo_all_blocks=1 00:23:37.125 --rc geninfo_unexecuted_blocks=1 00:23:37.125 00:23:37.125 ' 00:23:37.125 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:37.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.125 --rc genhtml_branch_coverage=1 00:23:37.125 --rc genhtml_function_coverage=1 00:23:37.125 --rc genhtml_legend=1 00:23:37.125 --rc geninfo_all_blocks=1 00:23:37.125 --rc geninfo_unexecuted_blocks=1 00:23:37.125 00:23:37.125 ' 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
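The xtrace above runs scripts/common.sh's version check (lt 1.15 2, which delegates to cmp_versions 1.15 '<' 2) before choosing lcov options. A minimal sketch of that comparison, assuming plain numeric dot-separated versions; the real helper also splits on '-' and ':' and supports more operators than shown here:

    # Return success iff version $1 is strictly less than version $2.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local op=$2
        local -a ver1 ver2
        local IFS=.-:                 # split fields on '.', '-', ':' as in the trace
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing fields compare as 0, so 1.15 vs 2 compares (1,15) vs (2,0).
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]             # all fields equal
    }

For the traced call, ver1=(1 15) and ver2=(2), so the first field decides: 1 < 2 and lt returns success, which is what selects the '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spelling exported into LCOV_OPTS in the lines above.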
00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:23:37.125 11:31:59 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:23:37.126 11:31:59 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:23:37.126 11:31:59 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:23:37.126 11:31:59 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:23:37.126 11:31:59 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:23:37.126 11:31:59 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:23:37.126 11:31:59 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:23:37.126 11:31:59 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:37.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
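Before the trap/waitforlisten lines below, it helps to see the whole bdev stack this trim test assembles over rpc.py once spdk_tgt is up on cores 0-2 (-m 0x7, pid 78394). A condensed sketch of the sequence the following trace executes, with the generated UUIDs replaced by illustrative shell variables ($rpc, $lvs, $lvol are not in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base NVMe -> nvme0n1
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                   # lvstore on the base dev
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")        # thin 103424 MiB lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache NVMe -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                            # 5171 MiB slice nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
         --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10      # FTL on top
    # ... I/O phase of the test ...
    $rpc bdev_ftl_unload -b ftl0                                       # teardown

ftl0 thus sits on the thin-provisioned lvol as base device with nvc0n1p0 as its write-buffer cache, matching the 'Using nvc0n1p0 as write buffer cache' and 'l2p maximum resident size is: 59 (of 60) MiB' notices in the startup trace that follows.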
00:23:37.126 11:31:59 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:37.126 11:31:59 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:37.126 11:31:59 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78394 00:23:37.126 11:31:59 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78394 00:23:37.126 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78394 ']' 00:23:37.126 11:31:59 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:37.126 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.126 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.126 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.126 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.126 11:31:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:37.384 [2024-12-10 11:31:59.368100] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:23:37.384 [2024-12-10 11:31:59.368309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78394 ] 00:23:37.642 [2024-12-10 11:31:59.553367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:37.642 [2024-12-10 11:31:59.657598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.642 [2024-12-10 11:31:59.657757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.642 [2024-12-10 11:31:59.657766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.602 11:32:00 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.602 11:32:00 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:38.602 11:32:00 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:38.602 11:32:00 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:23:38.602 11:32:00 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:38.602 11:32:00 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:23:38.602 11:32:00 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:23:38.602 11:32:00 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:38.860 11:32:00 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:38.860 11:32:00 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:23:38.860 11:32:00 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:38.860 11:32:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:38.860 11:32:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:38.860 11:32:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:38.860 11:32:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:38.860 11:32:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:39.119 11:32:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 
00:23:39.119 { 00:23:39.119 "name": "nvme0n1", 00:23:39.119 "aliases": [ 00:23:39.119 "0053b918-486c-4c60-8ce0-aadcb2af1c53" 00:23:39.119 ], 00:23:39.119 "product_name": "NVMe disk", 00:23:39.119 "block_size": 4096, 00:23:39.119 "num_blocks": 1310720, 00:23:39.119 "uuid": "0053b918-486c-4c60-8ce0-aadcb2af1c53", 00:23:39.119 "numa_id": -1, 00:23:39.119 "assigned_rate_limits": { 00:23:39.119 "rw_ios_per_sec": 0, 00:23:39.119 "rw_mbytes_per_sec": 0, 00:23:39.119 "r_mbytes_per_sec": 0, 00:23:39.119 "w_mbytes_per_sec": 0 00:23:39.119 }, 00:23:39.119 "claimed": true, 00:23:39.119 "claim_type": "read_many_write_one", 00:23:39.119 "zoned": false, 00:23:39.119 "supported_io_types": { 00:23:39.119 "read": true, 00:23:39.119 "write": true, 00:23:39.119 "unmap": true, 00:23:39.119 "flush": true, 00:23:39.119 "reset": true, 00:23:39.119 "nvme_admin": true, 00:23:39.119 "nvme_io": true, 00:23:39.119 "nvme_io_md": false, 00:23:39.119 "write_zeroes": true, 00:23:39.119 "zcopy": false, 00:23:39.119 "get_zone_info": false, 00:23:39.119 "zone_management": false, 00:23:39.119 "zone_append": false, 00:23:39.119 "compare": true, 00:23:39.119 "compare_and_write": false, 00:23:39.119 "abort": true, 00:23:39.119 "seek_hole": false, 00:23:39.119 "seek_data": false, 00:23:39.119 "copy": true, 00:23:39.119 "nvme_iov_md": false 00:23:39.119 }, 00:23:39.119 "driver_specific": { 00:23:39.119 "nvme": [ 00:23:39.119 { 00:23:39.119 "pci_address": "0000:00:11.0", 00:23:39.119 "trid": { 00:23:39.119 "trtype": "PCIe", 00:23:39.119 "traddr": "0000:00:11.0" 00:23:39.119 }, 00:23:39.119 "ctrlr_data": { 00:23:39.119 "cntlid": 0, 00:23:39.119 "vendor_id": "0x1b36", 00:23:39.119 "model_number": "QEMU NVMe Ctrl", 00:23:39.119 "serial_number": "12341", 00:23:39.119 "firmware_revision": "8.0.0", 00:23:39.119 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:39.119 "oacs": { 00:23:39.119 "security": 0, 00:23:39.119 "format": 1, 00:23:39.119 "firmware": 0, 00:23:39.119 "ns_manage": 1 00:23:39.119 }, 00:23:39.119 "multi_ctrlr": false, 00:23:39.119 "ana_reporting": false 00:23:39.119 }, 00:23:39.119 "vs": { 00:23:39.119 "nvme_version": "1.4" 00:23:39.119 }, 00:23:39.119 "ns_data": { 00:23:39.119 "id": 1, 00:23:39.119 "can_share": false 00:23:39.119 } 00:23:39.119 } 00:23:39.119 ], 00:23:39.119 "mp_policy": "active_passive" 00:23:39.119 } 00:23:39.119 } 00:23:39.119 ]' 00:23:39.119 11:32:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:39.119 11:32:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:39.119 11:32:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:39.119 11:32:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:39.119 11:32:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:39.119 11:32:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:23:39.119 11:32:01 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:23:39.119 11:32:01 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:39.119 11:32:01 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:23:39.119 11:32:01 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:39.119 11:32:01 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:39.377 11:32:01 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=78d86d0c-ec35-45ab-b156-137ac86a5a90 00:23:39.377 11:32:01 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:23:39.377 11:32:01 ftl.ftl_trim -- 
ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 78d86d0c-ec35-45ab-b156-137ac86a5a90 00:23:39.943 11:32:01 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:39.943 11:32:02 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=dc42cd24-8510-4b91-80b0-68b02e9b8300 00:23:39.943 11:32:02 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dc42cd24-8510-4b91-80b0-68b02e9b8300 00:23:40.201 11:32:02 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=88ee1751-1558-4dfa-a626-abcf9cea7467 00:23:40.201 11:32:02 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 88ee1751-1558-4dfa-a626-abcf9cea7467 00:23:40.201 11:32:02 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:23:40.201 11:32:02 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:40.201 11:32:02 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=88ee1751-1558-4dfa-a626-abcf9cea7467 00:23:40.201 11:32:02 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:23:40.201 11:32:02 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 88ee1751-1558-4dfa-a626-abcf9cea7467 00:23:40.201 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=88ee1751-1558-4dfa-a626-abcf9cea7467 00:23:40.201 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:40.201 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:40.201 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:40.201 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88ee1751-1558-4dfa-a626-abcf9cea7467 00:23:40.767 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:40.767 { 00:23:40.767 "name": "88ee1751-1558-4dfa-a626-abcf9cea7467", 00:23:40.767 "aliases": [ 00:23:40.767 "lvs/nvme0n1p0" 00:23:40.767 ], 00:23:40.767 "product_name": "Logical Volume", 00:23:40.767 "block_size": 4096, 00:23:40.767 "num_blocks": 26476544, 00:23:40.767 "uuid": "88ee1751-1558-4dfa-a626-abcf9cea7467", 00:23:40.767 "assigned_rate_limits": { 00:23:40.767 "rw_ios_per_sec": 0, 00:23:40.767 "rw_mbytes_per_sec": 0, 00:23:40.767 "r_mbytes_per_sec": 0, 00:23:40.767 "w_mbytes_per_sec": 0 00:23:40.767 }, 00:23:40.767 "claimed": false, 00:23:40.767 "zoned": false, 00:23:40.767 "supported_io_types": { 00:23:40.767 "read": true, 00:23:40.767 "write": true, 00:23:40.767 "unmap": true, 00:23:40.767 "flush": false, 00:23:40.767 "reset": true, 00:23:40.767 "nvme_admin": false, 00:23:40.767 "nvme_io": false, 00:23:40.767 "nvme_io_md": false, 00:23:40.767 "write_zeroes": true, 00:23:40.767 "zcopy": false, 00:23:40.767 "get_zone_info": false, 00:23:40.767 "zone_management": false, 00:23:40.767 "zone_append": false, 00:23:40.767 "compare": false, 00:23:40.767 "compare_and_write": false, 00:23:40.767 "abort": false, 00:23:40.767 "seek_hole": true, 00:23:40.767 "seek_data": true, 00:23:40.767 "copy": false, 00:23:40.767 "nvme_iov_md": false 00:23:40.767 }, 00:23:40.767 "driver_specific": { 00:23:40.767 "lvol": { 00:23:40.767 "lvol_store_uuid": "dc42cd24-8510-4b91-80b0-68b02e9b8300", 00:23:40.767 "base_bdev": "nvme0n1", 00:23:40.767 "thin_provision": true, 00:23:40.767 "num_allocated_clusters": 0, 00:23:40.767 "snapshot": false, 00:23:40.767 "clone": false, 00:23:40.767 "esnap_clone": false 00:23:40.767 } 00:23:40.767 } 
00:23:40.767 } 00:23:40.767 ]' 00:23:40.767 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:40.767 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:40.767 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:40.767 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:40.767 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:40.767 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:40.767 11:32:02 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:23:40.767 11:32:02 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:23:40.767 11:32:02 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:41.025 11:32:03 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:41.025 11:32:03 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:41.025 11:32:03 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 88ee1751-1558-4dfa-a626-abcf9cea7467 00:23:41.025 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=88ee1751-1558-4dfa-a626-abcf9cea7467 00:23:41.025 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:41.025 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:41.025 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:41.025 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88ee1751-1558-4dfa-a626-abcf9cea7467 00:23:41.283 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:41.283 { 00:23:41.283 "name": "88ee1751-1558-4dfa-a626-abcf9cea7467", 00:23:41.283 "aliases": [ 00:23:41.283 "lvs/nvme0n1p0" 00:23:41.283 ], 00:23:41.283 "product_name": "Logical Volume", 00:23:41.283 "block_size": 4096, 00:23:41.283 "num_blocks": 26476544, 00:23:41.283 "uuid": "88ee1751-1558-4dfa-a626-abcf9cea7467", 00:23:41.283 "assigned_rate_limits": { 00:23:41.283 "rw_ios_per_sec": 0, 00:23:41.283 "rw_mbytes_per_sec": 0, 00:23:41.283 "r_mbytes_per_sec": 0, 00:23:41.284 "w_mbytes_per_sec": 0 00:23:41.284 }, 00:23:41.284 "claimed": false, 00:23:41.284 "zoned": false, 00:23:41.284 "supported_io_types": { 00:23:41.284 "read": true, 00:23:41.284 "write": true, 00:23:41.284 "unmap": true, 00:23:41.284 "flush": false, 00:23:41.284 "reset": true, 00:23:41.284 "nvme_admin": false, 00:23:41.284 "nvme_io": false, 00:23:41.284 "nvme_io_md": false, 00:23:41.284 "write_zeroes": true, 00:23:41.284 "zcopy": false, 00:23:41.284 "get_zone_info": false, 00:23:41.284 "zone_management": false, 00:23:41.284 "zone_append": false, 00:23:41.284 "compare": false, 00:23:41.284 "compare_and_write": false, 00:23:41.284 "abort": false, 00:23:41.284 "seek_hole": true, 00:23:41.284 "seek_data": true, 00:23:41.284 "copy": false, 00:23:41.284 "nvme_iov_md": false 00:23:41.284 }, 00:23:41.284 "driver_specific": { 00:23:41.284 "lvol": { 00:23:41.284 "lvol_store_uuid": "dc42cd24-8510-4b91-80b0-68b02e9b8300", 00:23:41.284 "base_bdev": "nvme0n1", 00:23:41.284 "thin_provision": true, 00:23:41.284 "num_allocated_clusters": 0, 00:23:41.284 "snapshot": false, 00:23:41.284 "clone": false, 00:23:41.284 "esnap_clone": false 00:23:41.284 } 00:23:41.284 } 00:23:41.284 } 00:23:41.284 ]' 00:23:41.284 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] 
.block_size' 00:23:41.284 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:41.284 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:41.284 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:41.284 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:41.284 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:41.284 11:32:03 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:23:41.284 11:32:03 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:41.542 11:32:03 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:23:41.542 11:32:03 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:23:41.809 11:32:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 88ee1751-1558-4dfa-a626-abcf9cea7467 00:23:41.809 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=88ee1751-1558-4dfa-a626-abcf9cea7467 00:23:41.809 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:41.809 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:41.809 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:41.809 11:32:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88ee1751-1558-4dfa-a626-abcf9cea7467 00:23:42.067 11:32:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:42.067 { 00:23:42.067 "name": "88ee1751-1558-4dfa-a626-abcf9cea7467", 00:23:42.067 "aliases": [ 00:23:42.067 "lvs/nvme0n1p0" 00:23:42.067 ], 00:23:42.067 "product_name": "Logical Volume", 00:23:42.067 "block_size": 4096, 00:23:42.067 "num_blocks": 26476544, 00:23:42.067 "uuid": "88ee1751-1558-4dfa-a626-abcf9cea7467", 00:23:42.067 "assigned_rate_limits": { 00:23:42.067 "rw_ios_per_sec": 0, 00:23:42.067 "rw_mbytes_per_sec": 0, 00:23:42.067 "r_mbytes_per_sec": 0, 00:23:42.067 "w_mbytes_per_sec": 0 00:23:42.067 }, 00:23:42.067 "claimed": false, 00:23:42.067 "zoned": false, 00:23:42.067 "supported_io_types": { 00:23:42.067 "read": true, 00:23:42.067 "write": true, 00:23:42.067 "unmap": true, 00:23:42.067 "flush": false, 00:23:42.067 "reset": true, 00:23:42.067 "nvme_admin": false, 00:23:42.067 "nvme_io": false, 00:23:42.067 "nvme_io_md": false, 00:23:42.067 "write_zeroes": true, 00:23:42.067 "zcopy": false, 00:23:42.067 "get_zone_info": false, 00:23:42.067 "zone_management": false, 00:23:42.067 "zone_append": false, 00:23:42.067 "compare": false, 00:23:42.067 "compare_and_write": false, 00:23:42.067 "abort": false, 00:23:42.067 "seek_hole": true, 00:23:42.067 "seek_data": true, 00:23:42.067 "copy": false, 00:23:42.067 "nvme_iov_md": false 00:23:42.067 }, 00:23:42.067 "driver_specific": { 00:23:42.067 "lvol": { 00:23:42.067 "lvol_store_uuid": "dc42cd24-8510-4b91-80b0-68b02e9b8300", 00:23:42.067 "base_bdev": "nvme0n1", 00:23:42.067 "thin_provision": true, 00:23:42.067 "num_allocated_clusters": 0, 00:23:42.067 "snapshot": false, 00:23:42.067 "clone": false, 00:23:42.067 "esnap_clone": false 00:23:42.067 } 00:23:42.067 } 00:23:42.067 } 00:23:42.067 ]' 00:23:42.067 11:32:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:42.067 11:32:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:42.067 11:32:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:42.067 11:32:04 
ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:42.067 11:32:04 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:42.067 11:32:04 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:42.067 11:32:04 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:23:42.067 11:32:04 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 88ee1751-1558-4dfa-a626-abcf9cea7467 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:23:42.328 [2024-12-10 11:32:04.451419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.328 [2024-12-10 11:32:04.451488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:42.328 [2024-12-10 11:32:04.451513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:42.328 [2024-12-10 11:32:04.451527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.328 [2024-12-10 11:32:04.455020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.328 [2024-12-10 11:32:04.455068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:42.328 [2024-12-10 11:32:04.455090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.455 ms 00:23:42.328 [2024-12-10 11:32:04.455103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.328 [2024-12-10 11:32:04.455252] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:42.328 [2024-12-10 11:32:04.456222] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:42.328 [2024-12-10 11:32:04.456272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.328 [2024-12-10 11:32:04.456289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:42.328 [2024-12-10 11:32:04.456304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:23:42.328 [2024-12-10 11:32:04.456315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.328 [2024-12-10 11:32:04.456550] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b10030af-2e86-48c7-be0f-b009016a690f 00:23:42.328 [2024-12-10 11:32:04.457666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.328 [2024-12-10 11:32:04.457711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:42.328 [2024-12-10 11:32:04.457729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:42.328 [2024-12-10 11:32:04.457744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.328 [2024-12-10 11:32:04.462579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.328 [2024-12-10 11:32:04.462655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:42.328 [2024-12-10 11:32:04.462674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.737 ms 00:23:42.328 [2024-12-10 11:32:04.462693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.328 [2024-12-10 11:32:04.462875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.328 [2024-12-10 11:32:04.462900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:42.328 [2024-12-10 11:32:04.462915] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:42.328 [2024-12-10 11:32:04.462934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.328 [2024-12-10 11:32:04.462985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.328 [2024-12-10 11:32:04.463003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:42.328 [2024-12-10 11:32:04.463018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:42.328 [2024-12-10 11:32:04.463031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.328 [2024-12-10 11:32:04.463075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:42.328 [2024-12-10 11:32:04.467657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.328 [2024-12-10 11:32:04.467698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:42.328 [2024-12-10 11:32:04.467719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.586 ms 00:23:42.328 [2024-12-10 11:32:04.467731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.328 [2024-12-10 11:32:04.467865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.328 [2024-12-10 11:32:04.467904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:42.328 [2024-12-10 11:32:04.467921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:42.328 [2024-12-10 11:32:04.467934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.328 [2024-12-10 11:32:04.467973] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:42.328 [2024-12-10 11:32:04.468150] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:42.328 [2024-12-10 11:32:04.468180] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:42.328 [2024-12-10 11:32:04.468198] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:42.328 [2024-12-10 11:32:04.468216] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:42.328 [2024-12-10 11:32:04.468230] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:42.328 [2024-12-10 11:32:04.468247] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:42.328 [2024-12-10 11:32:04.468259] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:42.328 [2024-12-10 11:32:04.468275] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:42.328 [2024-12-10 11:32:04.468287] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:42.328 [2024-12-10 11:32:04.468302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.328 [2024-12-10 11:32:04.468313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:42.328 [2024-12-10 11:32:04.468327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:23:42.328 [2024-12-10 11:32:04.468339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.328 [2024-12-10 11:32:04.468449] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.328 [2024-12-10 11:32:04.468463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:42.328 [2024-12-10 11:32:04.468477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:42.328 [2024-12-10 11:32:04.468489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.328 [2024-12-10 11:32:04.468656] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:42.328 [2024-12-10 11:32:04.468678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:42.328 [2024-12-10 11:32:04.468694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:42.328 [2024-12-10 11:32:04.468706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.328 [2024-12-10 11:32:04.468720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:42.328 [2024-12-10 11:32:04.468732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:42.328 [2024-12-10 11:32:04.468745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:42.328 [2024-12-10 11:32:04.468757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:42.328 [2024-12-10 11:32:04.468773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:42.328 [2024-12-10 11:32:04.468785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:42.328 [2024-12-10 11:32:04.468798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:42.328 [2024-12-10 11:32:04.468810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:42.328 [2024-12-10 11:32:04.468823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:42.328 [2024-12-10 11:32:04.468834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:42.328 [2024-12-10 11:32:04.468848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:42.328 [2024-12-10 11:32:04.468859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.328 [2024-12-10 11:32:04.468875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:42.328 [2024-12-10 11:32:04.468887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:42.328 [2024-12-10 11:32:04.468901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.328 [2024-12-10 11:32:04.468912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:42.328 [2024-12-10 11:32:04.468925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:42.328 [2024-12-10 11:32:04.468936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.328 [2024-12-10 11:32:04.468949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:42.328 [2024-12-10 11:32:04.468961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:42.329 [2024-12-10 11:32:04.468974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.329 [2024-12-10 11:32:04.468985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:42.329 [2024-12-10 11:32:04.468998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:42.329 [2024-12-10 11:32:04.469009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.329 [2024-12-10 11:32:04.469022] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:42.329 [2024-12-10 11:32:04.469033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:42.329 [2024-12-10 11:32:04.469047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.329 [2024-12-10 11:32:04.469058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:42.329 [2024-12-10 11:32:04.469073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:42.329 [2024-12-10 11:32:04.469084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:42.329 [2024-12-10 11:32:04.469099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:42.329 [2024-12-10 11:32:04.469111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:42.329 [2024-12-10 11:32:04.469123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:42.329 [2024-12-10 11:32:04.469135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:42.329 [2024-12-10 11:32:04.469148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:42.329 [2024-12-10 11:32:04.469159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.329 [2024-12-10 11:32:04.469172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:42.329 [2024-12-10 11:32:04.469183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:42.329 [2024-12-10 11:32:04.469196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.329 [2024-12-10 11:32:04.469207] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:42.329 [2024-12-10 11:32:04.469221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:42.329 [2024-12-10 11:32:04.469232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:42.329 [2024-12-10 11:32:04.469245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.329 [2024-12-10 11:32:04.469257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:42.329 [2024-12-10 11:32:04.469274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:42.329 [2024-12-10 11:32:04.469285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:42.329 [2024-12-10 11:32:04.469299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:42.329 [2024-12-10 11:32:04.469310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:42.329 [2024-12-10 11:32:04.469323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:42.329 [2024-12-10 11:32:04.469337] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:42.329 [2024-12-10 11:32:04.469357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:42.329 [2024-12-10 11:32:04.469372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:42.329 [2024-12-10 11:32:04.469386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:42.329 [2024-12-10 11:32:04.469398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:42.329 [2024-12-10 11:32:04.469413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:42.329 [2024-12-10 11:32:04.469425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:42.329 [2024-12-10 11:32:04.469438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:42.329 [2024-12-10 11:32:04.469450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:42.329 [2024-12-10 11:32:04.469463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:42.329 [2024-12-10 11:32:04.469475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:42.329 [2024-12-10 11:32:04.469490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:42.329 [2024-12-10 11:32:04.469502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:42.329 [2024-12-10 11:32:04.469515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:42.329 [2024-12-10 11:32:04.469527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:42.329 [2024-12-10 11:32:04.469541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:42.329 [2024-12-10 11:32:04.469553] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:42.329 [2024-12-10 11:32:04.469567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:42.329 [2024-12-10 11:32:04.469580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:42.329 [2024-12-10 11:32:04.469594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:42.329 [2024-12-10 11:32:04.469605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:42.329 [2024-12-10 11:32:04.469619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:42.329 [2024-12-10 11:32:04.469644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.329 [2024-12-10 11:32:04.469660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:42.329 [2024-12-10 11:32:04.469673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:23:42.329 [2024-12-10 11:32:04.469686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.329 [2024-12-10 11:32:04.469781] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:42.329 [2024-12-10 11:32:04.469804] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:44.227 [2024-12-10 11:32:06.372890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.227 [2024-12-10 11:32:06.372995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:44.227 [2024-12-10 11:32:06.373019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1903.119 ms 00:23:44.227 [2024-12-10 11:32:06.373035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.405911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.405994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:44.485 [2024-12-10 11:32:06.406016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.554 ms 00:23:44.485 [2024-12-10 11:32:06.406031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.406224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.406248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:44.485 [2024-12-10 11:32:06.406289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:44.485 [2024-12-10 11:32:06.406312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.458354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.458620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:44.485 [2024-12-10 11:32:06.458664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.002 ms 00:23:44.485 [2024-12-10 11:32:06.458685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.458862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.458888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:44.485 [2024-12-10 11:32:06.458903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:44.485 [2024-12-10 11:32:06.458917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.459247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.459269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:44.485 [2024-12-10 11:32:06.459283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:23:44.485 [2024-12-10 11:32:06.459297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.459451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.459470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:44.485 [2024-12-10 11:32:06.459505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:23:44.485 [2024-12-10 11:32:06.459523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.477529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.477810] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:44.485 [2024-12-10 11:32:06.477846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.963 ms 00:23:44.485 [2024-12-10 11:32:06.477863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.491402] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:44.485 [2024-12-10 11:32:06.505538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.505606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:44.485 [2024-12-10 11:32:06.505650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.494 ms 00:23:44.485 [2024-12-10 11:32:06.505667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.566549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.566620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:44.485 [2024-12-10 11:32:06.566661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.739 ms 00:23:44.485 [2024-12-10 11:32:06.566676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.567005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.567036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:44.485 [2024-12-10 11:32:06.567057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:23:44.485 [2024-12-10 11:32:06.567069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.598514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.598565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:44.485 [2024-12-10 11:32:06.598588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.400 ms 00:23:44.485 [2024-12-10 11:32:06.598605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.630082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.630131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:44.485 [2024-12-10 11:32:06.630155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.198 ms 00:23:44.485 [2024-12-10 11:32:06.630167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.485 [2024-12-10 11:32:06.630976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.485 [2024-12-10 11:32:06.631011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:44.485 [2024-12-10 11:32:06.631031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:23:44.485 [2024-12-10 11:32:06.631044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.744 [2024-12-10 11:32:06.720466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.744 [2024-12-10 11:32:06.720535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:44.744 [2024-12-10 11:32:06.720563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.372 ms 00:23:44.744 [2024-12-10 11:32:06.720576] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.744 [2024-12-10 11:32:06.754244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.744 [2024-12-10 11:32:06.754308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:44.744 [2024-12-10 11:32:06.754333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.488 ms 00:23:44.744 [2024-12-10 11:32:06.754349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.744 [2024-12-10 11:32:06.786819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.744 [2024-12-10 11:32:06.786877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:44.744 [2024-12-10 11:32:06.786916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.348 ms 00:23:44.744 [2024-12-10 11:32:06.786928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.744 [2024-12-10 11:32:06.818974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.744 [2024-12-10 11:32:06.819043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:44.744 [2024-12-10 11:32:06.819083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.893 ms 00:23:44.744 [2024-12-10 11:32:06.819095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.744 [2024-12-10 11:32:06.819212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.744 [2024-12-10 11:32:06.819233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:44.744 [2024-12-10 11:32:06.819252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:44.744 [2024-12-10 11:32:06.819264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.744 [2024-12-10 11:32:06.819359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.744 [2024-12-10 11:32:06.819375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:44.744 [2024-12-10 11:32:06.819390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:44.744 [2024-12-10 11:32:06.819401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.744 [2024-12-10 11:32:06.820427] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:44.744 [2024-12-10 11:32:06.824823] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2368.702 ms, result 0 00:23:44.744 [2024-12-10 11:32:06.825839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:44.744 { 00:23:44.744 "name": "ftl0", 00:23:44.744 "uuid": "b10030af-2e86-48c7-be0f-b009016a690f" 00:23:44.744 } 00:23:44.744 11:32:06 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:23:44.744 11:32:06 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:44.744 11:32:06 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:44.744 11:32:06 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:23:44.744 11:32:06 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:44.744 11:32:06 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:44.744 11:32:06 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_wait_for_examine 00:23:45.041 11:32:07 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:45.299 [ 00:23:45.299 { 00:23:45.299 "name": "ftl0", 00:23:45.299 "aliases": [ 00:23:45.299 "b10030af-2e86-48c7-be0f-b009016a690f" 00:23:45.299 ], 00:23:45.299 "product_name": "FTL disk", 00:23:45.299 "block_size": 4096, 00:23:45.299 "num_blocks": 23592960, 00:23:45.300 "uuid": "b10030af-2e86-48c7-be0f-b009016a690f", 00:23:45.300 "assigned_rate_limits": { 00:23:45.300 "rw_ios_per_sec": 0, 00:23:45.300 "rw_mbytes_per_sec": 0, 00:23:45.300 "r_mbytes_per_sec": 0, 00:23:45.300 "w_mbytes_per_sec": 0 00:23:45.300 }, 00:23:45.300 "claimed": false, 00:23:45.300 "zoned": false, 00:23:45.300 "supported_io_types": { 00:23:45.300 "read": true, 00:23:45.300 "write": true, 00:23:45.300 "unmap": true, 00:23:45.300 "flush": true, 00:23:45.300 "reset": false, 00:23:45.300 "nvme_admin": false, 00:23:45.300 "nvme_io": false, 00:23:45.300 "nvme_io_md": false, 00:23:45.300 "write_zeroes": true, 00:23:45.300 "zcopy": false, 00:23:45.300 "get_zone_info": false, 00:23:45.300 "zone_management": false, 00:23:45.300 "zone_append": false, 00:23:45.300 "compare": false, 00:23:45.300 "compare_and_write": false, 00:23:45.300 "abort": false, 00:23:45.300 "seek_hole": false, 00:23:45.300 "seek_data": false, 00:23:45.300 "copy": false, 00:23:45.300 "nvme_iov_md": false 00:23:45.300 }, 00:23:45.300 "driver_specific": { 00:23:45.300 "ftl": { 00:23:45.300 "base_bdev": "88ee1751-1558-4dfa-a626-abcf9cea7467", 00:23:45.300 "cache": "nvc0n1p0" 00:23:45.300 } 00:23:45.300 } 00:23:45.300 } 00:23:45.300 ] 00:23:45.300 11:32:07 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:23:45.300 11:32:07 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:23:45.300 11:32:07 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:45.863 11:32:07 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:23:45.863 11:32:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:23:46.119 11:32:08 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:23:46.119 { 00:23:46.119 "name": "ftl0", 00:23:46.119 "aliases": [ 00:23:46.119 "b10030af-2e86-48c7-be0f-b009016a690f" 00:23:46.119 ], 00:23:46.119 "product_name": "FTL disk", 00:23:46.119 "block_size": 4096, 00:23:46.119 "num_blocks": 23592960, 00:23:46.120 "uuid": "b10030af-2e86-48c7-be0f-b009016a690f", 00:23:46.120 "assigned_rate_limits": { 00:23:46.120 "rw_ios_per_sec": 0, 00:23:46.120 "rw_mbytes_per_sec": 0, 00:23:46.120 "r_mbytes_per_sec": 0, 00:23:46.120 "w_mbytes_per_sec": 0 00:23:46.120 }, 00:23:46.120 "claimed": false, 00:23:46.120 "zoned": false, 00:23:46.120 "supported_io_types": { 00:23:46.120 "read": true, 00:23:46.120 "write": true, 00:23:46.120 "unmap": true, 00:23:46.120 "flush": true, 00:23:46.120 "reset": false, 00:23:46.120 "nvme_admin": false, 00:23:46.120 "nvme_io": false, 00:23:46.120 "nvme_io_md": false, 00:23:46.120 "write_zeroes": true, 00:23:46.120 "zcopy": false, 00:23:46.120 "get_zone_info": false, 00:23:46.120 "zone_management": false, 00:23:46.120 "zone_append": false, 00:23:46.120 "compare": false, 00:23:46.120 "compare_and_write": false, 00:23:46.120 "abort": false, 00:23:46.120 "seek_hole": false, 00:23:46.120 "seek_data": false, 00:23:46.120 "copy": false, 00:23:46.120 "nvme_iov_md": false 00:23:46.120 }, 00:23:46.120 "driver_specific": { 00:23:46.120 "ftl": { 00:23:46.120 
"base_bdev": "88ee1751-1558-4dfa-a626-abcf9cea7467", 00:23:46.120 "cache": "nvc0n1p0" 00:23:46.120 } 00:23:46.120 } 00:23:46.120 } 00:23:46.120 ]' 00:23:46.120 11:32:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:23:46.120 11:32:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:23:46.120 11:32:08 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:46.378 [2024-12-10 11:32:08.338283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.378 [2024-12-10 11:32:08.338361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:46.378 [2024-12-10 11:32:08.338402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:46.378 [2024-12-10 11:32:08.338417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.378 [2024-12-10 11:32:08.338461] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:46.378 [2024-12-10 11:32:08.341839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.378 [2024-12-10 11:32:08.341874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:46.378 [2024-12-10 11:32:08.341899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.348 ms 00:23:46.378 [2024-12-10 11:32:08.341911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.378 [2024-12-10 11:32:08.342507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.378 [2024-12-10 11:32:08.342533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:46.379 [2024-12-10 11:32:08.342551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:23:46.379 [2024-12-10 11:32:08.342562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-10 11:32:08.346434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-10 11:32:08.346466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:46.379 [2024-12-10 11:32:08.346501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.827 ms 00:23:46.379 [2024-12-10 11:32:08.346513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-10 11:32:08.354223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-10 11:32:08.354318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:46.379 [2024-12-10 11:32:08.354361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.648 ms 00:23:46.379 [2024-12-10 11:32:08.354374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-10 11:32:08.387231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-10 11:32:08.387473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:46.379 [2024-12-10 11:32:08.387516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.690 ms 00:23:46.379 [2024-12-10 11:32:08.387531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-10 11:32:08.406696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-10 11:32:08.406749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:46.379 [2024-12-10 11:32:08.406794] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.000 ms 00:23:46.379 [2024-12-10 11:32:08.406807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-10 11:32:08.407066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-10 11:32:08.407089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:46.379 [2024-12-10 11:32:08.407106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:23:46.379 [2024-12-10 11:32:08.407118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-10 11:32:08.438965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-10 11:32:08.439012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:46.379 [2024-12-10 11:32:08.439051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.809 ms 00:23:46.379 [2024-12-10 11:32:08.439063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-10 11:32:08.470464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-10 11:32:08.470512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:46.379 [2024-12-10 11:32:08.470552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.296 ms 00:23:46.379 [2024-12-10 11:32:08.470565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-10 11:32:08.501458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-10 11:32:08.501518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:46.379 [2024-12-10 11:32:08.501557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.758 ms 00:23:46.379 [2024-12-10 11:32:08.501570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-10 11:32:08.532513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-10 11:32:08.532558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:46.379 [2024-12-10 11:32:08.532597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.765 ms 00:23:46.379 [2024-12-10 11:32:08.532609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-10 11:32:08.532726] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:46.379 [2024-12-10 11:32:08.532755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: 
free 00:23:46.379 [2024-12-10 11:32:08.532856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.532999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 
261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:46.379 [2024-12-10 11:32:08.533580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533898] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.533990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.534004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.534016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.534030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.534042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.534056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.534069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.534083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.534095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.534109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.534122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.534137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:46.380 [2024-12-10 11:32:08.534158] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:46.380 [2024-12-10 11:32:08.534174] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b10030af-2e86-48c7-be0f-b009016a690f 00:23:46.380 [2024-12-10 11:32:08.534187] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:46.380 [2024-12-10 11:32:08.534200] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:46.380 [2024-12-10 11:32:08.534214] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:46.380 [2024-12-10 11:32:08.534228] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:46.380 [2024-12-10 11:32:08.534239] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:46.380 [2024-12-10 11:32:08.534252] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:46.380 [2024-12-10 11:32:08.534264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:46.380 [2024-12-10 11:32:08.534276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:46.380 [2024-12-10 11:32:08.534287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:46.380 [2024-12-10 11:32:08.534301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.380 [2024-12-10 11:32:08.534312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:46.380 [2024-12-10 11:32:08.534327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.579 ms 00:23:46.380 [2024-12-10 11:32:08.534339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.638 [2024-12-10 11:32:08.551062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.638 [2024-12-10 11:32:08.551248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:46.639 [2024-12-10 11:32:08.551287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.683 ms 00:23:46.639 [2024-12-10 11:32:08.551301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.639 [2024-12-10 11:32:08.551825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.639 [2024-12-10 11:32:08.551856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:46.639 [2024-12-10 11:32:08.551875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:23:46.639 [2024-12-10 11:32:08.551887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.639 [2024-12-10 11:32:08.609917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.639 [2024-12-10 11:32:08.609990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:46.639 [2024-12-10 11:32:08.610031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.639 [2024-12-10 11:32:08.610044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.639 [2024-12-10 11:32:08.610235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.639 [2024-12-10 11:32:08.610255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:46.639 [2024-12-10 11:32:08.610271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.639 [2024-12-10 11:32:08.610283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.639 [2024-12-10 11:32:08.610374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.639 [2024-12-10 11:32:08.610397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:46.639 [2024-12-10 11:32:08.610416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.639 [2024-12-10 11:32:08.610428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.639 [2024-12-10 11:32:08.610465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.639 [2024-12-10 11:32:08.610480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:46.639 [2024-12-10 11:32:08.610494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.639 [2024-12-10 11:32:08.610506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
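An aside for readers mining this transcript: each FTL management step above is logged as a trace_step quadruple (Action, name, duration, status), and the statistics block prints WAF as inf because user writes are 0 while the device itself wrote 960 blocks of metadata; a write-amplification ratio over zero user writes is infinite. Below is a minimal sketch for tallying per-step durations out of a log like this one. It assumes one console line per log entry, as Jenkins emits them; the regexes and function name are illustrative, not part of the SPDK tree.

```python
import re
from collections import OrderedDict

# Each FTL management step appears as a "name:" trace_step line followed
# by a "duration:" trace_step line (see the entries above and below).
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(log_lines):
    """Map each management step name to its summed duration in ms."""
    steps = OrderedDict()
    pending = None
    for line in log_lines:
        if (m := NAME_RE.search(line)):
            pending = m.group(1).strip()
        elif (m := DUR_RE.search(line)) and pending:
            steps[pending] = steps.get(pending, 0.0) + float(m.group(1))
            pending = None
    return steps
```

Summed over the quadruples in this sequence, the per-step durations approximately account for the 469.848 ms 'FTL shutdown' total reported when the management process finishes below; the remainder is time spent between steps.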
00:23:46.639 [2024-12-10 11:32:08.721201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.639 [2024-12-10 11:32:08.721272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:46.639 [2024-12-10 11:32:08.721296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.639 [2024-12-10 11:32:08.721309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.897 [2024-12-10 11:32:08.806699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.897 [2024-12-10 11:32:08.806954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:46.897 [2024-12-10 11:32:08.806992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.897 [2024-12-10 11:32:08.807006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.897 [2024-12-10 11:32:08.807158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.897 [2024-12-10 11:32:08.807178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:46.897 [2024-12-10 11:32:08.807200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.897 [2024-12-10 11:32:08.807212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.897 [2024-12-10 11:32:08.807276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.897 [2024-12-10 11:32:08.807291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:46.897 [2024-12-10 11:32:08.807305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.897 [2024-12-10 11:32:08.807317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.897 [2024-12-10 11:32:08.807482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.897 [2024-12-10 11:32:08.807503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:46.897 [2024-12-10 11:32:08.807520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.897 [2024-12-10 11:32:08.807534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.897 [2024-12-10 11:32:08.807621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.897 [2024-12-10 11:32:08.807665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:46.897 [2024-12-10 11:32:08.807683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.897 [2024-12-10 11:32:08.807696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.897 [2024-12-10 11:32:08.807762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.897 [2024-12-10 11:32:08.807778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:46.897 [2024-12-10 11:32:08.807796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.897 [2024-12-10 11:32:08.807810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.897 [2024-12-10 11:32:08.807891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.897 [2024-12-10 11:32:08.807910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:46.897 [2024-12-10 11:32:08.807926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.897 [2024-12-10 11:32:08.807937] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.897 [2024-12-10 11:32:08.808166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 469.848 ms, result 0 00:23:46.897 true 00:23:46.897 11:32:08 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78394 00:23:46.897 11:32:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78394 ']' 00:23:46.897 11:32:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78394 00:23:46.897 11:32:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:46.897 11:32:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.897 11:32:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78394 00:23:46.897 11:32:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:46.897 11:32:08 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:46.897 killing process with pid 78394 00:23:46.897 11:32:08 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78394' 00:23:46.897 11:32:08 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78394 00:23:46.897 11:32:08 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78394 00:23:52.161 11:32:13 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:23:52.420 65536+0 records in 00:23:52.420 65536+0 records out 00:23:52.420 268435456 bytes (268 MB, 256 MiB) copied, 1.20508 s, 223 MB/s 00:23:52.420 11:32:14 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:52.678 [2024-12-10 11:32:14.649940] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
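The trim test stages its input with dd: 65536 blocks of 4 KiB read from /dev/urandom, i.e. 268435456 bytes (256 MiB), matching the dd summary above, and then hands that file to spdk_dd with --if pointing at test/ftl/random_pattern. A rough Python equivalent of the staging step follows; the dd line in the log elides its output path, so the filename here is an assumption based on the --if argument.

```python
import os

BLOCK_SIZE = 4096     # bs=4K
BLOCK_COUNT = 65536   # count=65536 -> 65536 * 4096 = 268435456 bytes (256 MiB)

# Rough equivalent of `dd if=/dev/urandom bs=4K count=65536`, writing the
# pattern file that spdk_dd later reads via --if (path assumed, see above).
with open("random_pattern", "wb") as f:
    for _ in range(BLOCK_COUNT):
        f.write(os.urandom(BLOCK_SIZE))
```

spdk_dd then copies this pattern onto the ftl0 bdev described by ftl.json, which is what produces the 'Copying: N/256 [MB]' progress entries further down.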
00:23:52.678 [2024-12-10 11:32:14.650510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78589 ] 00:23:52.678 [2024-12-10 11:32:14.828560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.968 [2024-12-10 11:32:14.953692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.252 [2024-12-10 11:32:15.275059] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:53.252 [2024-12-10 11:32:15.275165] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:53.512 [2024-12-10 11:32:15.439058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.512 [2024-12-10 11:32:15.439113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:53.512 [2024-12-10 11:32:15.439148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:53.512 [2024-12-10 11:32:15.439159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.512 [2024-12-10 11:32:15.442642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.512 [2024-12-10 11:32:15.442699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:53.512 [2024-12-10 11:32:15.442733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.456 ms 00:23:53.512 [2024-12-10 11:32:15.442744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.512 [2024-12-10 11:32:15.442964] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:53.512 [2024-12-10 11:32:15.443941] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:53.512 [2024-12-10 11:32:15.443983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.512 [2024-12-10 11:32:15.443998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:53.512 [2024-12-10 11:32:15.444010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:23:53.512 [2024-12-10 11:32:15.444021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.512 [2024-12-10 11:32:15.445274] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:53.512 [2024-12-10 11:32:15.461316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.512 [2024-12-10 11:32:15.461376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:53.512 [2024-12-10 11:32:15.461411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.043 ms 00:23:53.512 [2024-12-10 11:32:15.461423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.512 [2024-12-10 11:32:15.461590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.512 [2024-12-10 11:32:15.461613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:53.512 [2024-12-10 11:32:15.461646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:53.512 [2024-12-10 11:32:15.461662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.512 [2024-12-10 11:32:15.465993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:53.512 [2024-12-10 11:32:15.466052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:53.512 [2024-12-10 11:32:15.466084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.266 ms 00:23:53.512 [2024-12-10 11:32:15.466095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.512 [2024-12-10 11:32:15.466243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.512 [2024-12-10 11:32:15.466264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:53.512 [2024-12-10 11:32:15.466277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:53.512 [2024-12-10 11:32:15.466287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.512 [2024-12-10 11:32:15.466333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.512 [2024-12-10 11:32:15.466348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:53.512 [2024-12-10 11:32:15.466359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:53.512 [2024-12-10 11:32:15.466370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.512 [2024-12-10 11:32:15.466398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:53.512 [2024-12-10 11:32:15.470621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.512 [2024-12-10 11:32:15.470684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:53.512 [2024-12-10 11:32:15.470716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.231 ms 00:23:53.512 [2024-12-10 11:32:15.470727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.512 [2024-12-10 11:32:15.470804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.512 [2024-12-10 11:32:15.470824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:53.512 [2024-12-10 11:32:15.470836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:53.512 [2024-12-10 11:32:15.470847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.512 [2024-12-10 11:32:15.470892] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:53.512 [2024-12-10 11:32:15.470920] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:53.512 [2024-12-10 11:32:15.470963] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:53.512 [2024-12-10 11:32:15.470982] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:53.512 [2024-12-10 11:32:15.471093] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:53.512 [2024-12-10 11:32:15.471120] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:53.512 [2024-12-10 11:32:15.471135] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:53.512 [2024-12-10 11:32:15.471154] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:53.512 [2024-12-10 11:32:15.471168] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:53.512 [2024-12-10 11:32:15.471180] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:53.512 [2024-12-10 11:32:15.471190] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:53.512 [2024-12-10 11:32:15.471201] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:53.512 [2024-12-10 11:32:15.471211] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:53.512 [2024-12-10 11:32:15.471222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.512 [2024-12-10 11:32:15.471234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:53.512 [2024-12-10 11:32:15.471245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:23:53.512 [2024-12-10 11:32:15.471256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.513 [2024-12-10 11:32:15.471357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.513 [2024-12-10 11:32:15.471376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:53.513 [2024-12-10 11:32:15.471388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:53.513 [2024-12-10 11:32:15.471399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.513 [2024-12-10 11:32:15.471511] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:53.513 [2024-12-10 11:32:15.471534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:53.513 [2024-12-10 11:32:15.471546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:53.513 [2024-12-10 11:32:15.471558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.513 [2024-12-10 11:32:15.471569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:53.513 [2024-12-10 11:32:15.471579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:53.513 [2024-12-10 11:32:15.471589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:53.513 [2024-12-10 11:32:15.471599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:53.513 [2024-12-10 11:32:15.471609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:53.513 [2024-12-10 11:32:15.471619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:53.513 [2024-12-10 11:32:15.471650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:53.513 [2024-12-10 11:32:15.471678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:53.513 [2024-12-10 11:32:15.471688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:53.513 [2024-12-10 11:32:15.471699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:53.513 [2024-12-10 11:32:15.471710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:53.513 [2024-12-10 11:32:15.471721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.513 [2024-12-10 11:32:15.471732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:53.513 [2024-12-10 11:32:15.471742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:53.513 [2024-12-10 11:32:15.471752] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.513 [2024-12-10 11:32:15.471764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:53.513 [2024-12-10 11:32:15.471774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:53.513 [2024-12-10 11:32:15.471784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.513 [2024-12-10 11:32:15.471794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:53.513 [2024-12-10 11:32:15.471804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:53.513 [2024-12-10 11:32:15.471814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.513 [2024-12-10 11:32:15.471825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:53.513 [2024-12-10 11:32:15.471835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:53.513 [2024-12-10 11:32:15.471856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.513 [2024-12-10 11:32:15.471868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:53.513 [2024-12-10 11:32:15.471879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:53.513 [2024-12-10 11:32:15.471888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.513 [2024-12-10 11:32:15.471898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:53.513 [2024-12-10 11:32:15.471908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:53.513 [2024-12-10 11:32:15.471918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:53.513 [2024-12-10 11:32:15.471928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:53.513 [2024-12-10 11:32:15.471938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:53.513 [2024-12-10 11:32:15.471948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:53.513 [2024-12-10 11:32:15.471958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:53.513 [2024-12-10 11:32:15.471967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:53.513 [2024-12-10 11:32:15.471977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.513 [2024-12-10 11:32:15.471987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:53.513 [2024-12-10 11:32:15.471997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:53.513 [2024-12-10 11:32:15.472007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.513 [2024-12-10 11:32:15.472017] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:53.513 [2024-12-10 11:32:15.472028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:53.513 [2024-12-10 11:32:15.472043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:53.513 [2024-12-10 11:32:15.472053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.513 [2024-12-10 11:32:15.472066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:53.513 [2024-12-10 11:32:15.472077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:53.513 [2024-12-10 11:32:15.472087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:53.513 
[2024-12-10 11:32:15.472098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:53.513 [2024-12-10 11:32:15.472108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:53.513 [2024-12-10 11:32:15.472118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:53.513 [2024-12-10 11:32:15.472130] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:53.513 [2024-12-10 11:32:15.472143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:53.513 [2024-12-10 11:32:15.472156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:53.513 [2024-12-10 11:32:15.472167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:53.513 [2024-12-10 11:32:15.472178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:53.513 [2024-12-10 11:32:15.472189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:53.513 [2024-12-10 11:32:15.472199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:53.513 [2024-12-10 11:32:15.472210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:53.513 [2024-12-10 11:32:15.472221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:53.513 [2024-12-10 11:32:15.472232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:53.513 [2024-12-10 11:32:15.472243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:53.513 [2024-12-10 11:32:15.472254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:53.513 [2024-12-10 11:32:15.472265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:53.513 [2024-12-10 11:32:15.472275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:53.513 [2024-12-10 11:32:15.472286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:53.513 [2024-12-10 11:32:15.472297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:53.513 [2024-12-10 11:32:15.472308] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:53.513 [2024-12-10 11:32:15.472321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:53.513 [2024-12-10 11:32:15.472333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:53.513 [2024-12-10 11:32:15.472344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:53.513 [2024-12-10 11:32:15.472355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:53.513 [2024-12-10 11:32:15.472366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:53.513 [2024-12-10 11:32:15.472378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.513 [2024-12-10 11:32:15.472395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:53.513 [2024-12-10 11:32:15.472407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:23:53.513 [2024-12-10 11:32:15.472417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.513 [2024-12-10 11:32:15.505845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.513 [2024-12-10 11:32:15.505906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:53.513 [2024-12-10 11:32:15.505957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.327 ms 00:23:53.513 [2024-12-10 11:32:15.505975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.513 [2024-12-10 11:32:15.506177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.513 [2024-12-10 11:32:15.506197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:53.513 [2024-12-10 11:32:15.506210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:53.513 [2024-12-10 11:32:15.506220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.513 [2024-12-10 11:32:15.562324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.513 [2024-12-10 11:32:15.562395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.513 [2024-12-10 11:32:15.562417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.071 ms 00:23:53.513 [2024-12-10 11:32:15.562429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.513 [2024-12-10 11:32:15.562607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.513 [2024-12-10 11:32:15.562627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:53.513 [2024-12-10 11:32:15.562679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:53.513 [2024-12-10 11:32:15.562694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.513 [2024-12-10 11:32:15.563024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.513 [2024-12-10 11:32:15.563043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:53.514 [2024-12-10 11:32:15.563061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:23:53.514 [2024-12-10 11:32:15.563072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.514 [2024-12-10 11:32:15.563236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.514 [2024-12-10 11:32:15.563261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:53.514 [2024-12-10 11:32:15.563273] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:23:53.514 [2024-12-10 11:32:15.563284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.514 [2024-12-10 11:32:15.580726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.514 [2024-12-10 11:32:15.580775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.514 [2024-12-10 11:32:15.580810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.411 ms 00:23:53.514 [2024-12-10 11:32:15.580822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.514 [2024-12-10 11:32:15.597096] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:53.514 [2024-12-10 11:32:15.597142] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:53.514 [2024-12-10 11:32:15.597162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.514 [2024-12-10 11:32:15.597174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:53.514 [2024-12-10 11:32:15.597187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.165 ms 00:23:53.514 [2024-12-10 11:32:15.597198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.514 [2024-12-10 11:32:15.626666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.514 [2024-12-10 11:32:15.626866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:53.514 [2024-12-10 11:32:15.626895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.368 ms 00:23:53.514 [2024-12-10 11:32:15.626910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.514 [2024-12-10 11:32:15.642678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.514 [2024-12-10 11:32:15.642718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:53.514 [2024-12-10 11:32:15.642750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.667 ms 00:23:53.514 [2024-12-10 11:32:15.642761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.514 [2024-12-10 11:32:15.658127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.514 [2024-12-10 11:32:15.658168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:53.514 [2024-12-10 11:32:15.658199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.276 ms 00:23:53.514 [2024-12-10 11:32:15.658209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.514 [2024-12-10 11:32:15.659037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.514 [2024-12-10 11:32:15.659066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:53.514 [2024-12-10 11:32:15.659080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:23:53.514 [2024-12-10 11:32:15.659091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.773 [2024-12-10 11:32:15.731314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.773 [2024-12-10 11:32:15.731599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:53.773 [2024-12-10 11:32:15.731649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.188 ms 00:23:53.773 [2024-12-10 11:32:15.731666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.773 [2024-12-10 11:32:15.744193] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:53.773 [2024-12-10 11:32:15.757817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.773 [2024-12-10 11:32:15.757886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:53.773 [2024-12-10 11:32:15.757921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.999 ms 00:23:53.773 [2024-12-10 11:32:15.757943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.773 [2024-12-10 11:32:15.758084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.773 [2024-12-10 11:32:15.758104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:53.773 [2024-12-10 11:32:15.758118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:53.773 [2024-12-10 11:32:15.758129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.773 [2024-12-10 11:32:15.758193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.773 [2024-12-10 11:32:15.758209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:53.773 [2024-12-10 11:32:15.758221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:53.773 [2024-12-10 11:32:15.758237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.773 [2024-12-10 11:32:15.758283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.773 [2024-12-10 11:32:15.758301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:53.773 [2024-12-10 11:32:15.758312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:53.773 [2024-12-10 11:32:15.758323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.773 [2024-12-10 11:32:15.758363] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:53.773 [2024-12-10 11:32:15.758380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.773 [2024-12-10 11:32:15.758391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:53.773 [2024-12-10 11:32:15.758402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:53.773 [2024-12-10 11:32:15.758412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.773 [2024-12-10 11:32:15.789349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.773 [2024-12-10 11:32:15.789393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:53.773 [2024-12-10 11:32:15.789427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.906 ms 00:23:53.773 [2024-12-10 11:32:15.789439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.773 [2024-12-10 11:32:15.789565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.773 [2024-12-10 11:32:15.789586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:53.773 [2024-12-10 11:32:15.789598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:53.773 [2024-12-10 11:32:15.789610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
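The layout numbers printed during this startup are internally consistent, which is easy to verify by hand. Here is a small sanity check using only values that appear elsewhere in this log; the variable names are illustrative.

```python
MIB = 1024 * 1024

# From the ftl0 bdev descriptor earlier in the log:
block_size = 4096
num_blocks = 23592960

# From the ftl_layout_setup lines above:
l2p_entries = 23592960   # one entry per exposed logical block
l2p_addr_size = 4        # bytes per L2P entry

# L2P table size: 23592960 entries * 4 B = 94371840 B = 90 MiB,
# matching "Region l2p ... blocks: 90.00 MiB".
assert l2p_entries * l2p_addr_size == 90 * MIB

# Exposed capacity: 23592960 blocks * 4096 B = 92160 MiB (90 GiB),
# inside the 102400 MiB data_btm region of the base device.
assert num_blocks * block_size == 92160 * MIB

# The 100 bands of 261120 blocks dumped at shutdown cover the
# exposed block count with room to spare.
assert 100 * 261120 >= num_blocks
```

The gap between the 92160 MiB exposed capacity and the 102400 MiB data_btm region is the FTL's overprovisioning headroom.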
00:23:53.773 [2024-12-10 11:32:15.790667] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:53.773 [2024-12-10 11:32:15.794757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.260 ms, result 0 00:23:53.773 [2024-12-10 11:32:15.795644] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:53.773 [2024-12-10 11:32:15.812174] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:54.707  [2024-12-10T11:32:18.250Z] Copying: 25/256 [MB] (25 MBps) [2024-12-10T11:32:19.185Z] Copying: 50/256 [MB] (25 MBps) [2024-12-10T11:32:20.121Z] Copying: 75/256 [MB] (25 MBps) [2024-12-10T11:32:21.057Z] Copying: 98/256 [MB] (23 MBps) [2024-12-10T11:32:22.043Z] Copying: 123/256 [MB] (24 MBps) [2024-12-10T11:32:22.978Z] Copying: 147/256 [MB] (24 MBps) [2024-12-10T11:32:23.913Z] Copying: 171/256 [MB] (24 MBps) [2024-12-10T11:32:24.849Z] Copying: 195/256 [MB] (23 MBps) [2024-12-10T11:32:26.224Z] Copying: 220/256 [MB] (24 MBps) [2024-12-10T11:32:26.483Z] Copying: 242/256 [MB] (22 MBps) [2024-12-10T11:32:26.483Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-10 11:32:26.368685] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:04.316 [2024-12-10 11:32:26.381223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.316 [2024-12-10 11:32:26.381280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:04.316 [2024-12-10 11:32:26.381299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:04.316 [2024-12-10 11:32:26.381318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.316 [2024-12-10 11:32:26.381351] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:04.316 [2024-12-10 11:32:26.384832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.316 [2024-12-10 11:32:26.384861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:04.316 [2024-12-10 11:32:26.384875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.459 ms 00:24:04.316 [2024-12-10 11:32:26.384900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.317 [2024-12-10 11:32:26.386621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.317 [2024-12-10 11:32:26.386673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:04.317 [2024-12-10 11:32:26.386690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:24:04.317 [2024-12-10 11:32:26.386702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.317 [2024-12-10 11:32:26.393855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.317 [2024-12-10 11:32:26.393900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:04.317 [2024-12-10 11:32:26.393929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.127 ms 00:24:04.317 [2024-12-10 11:32:26.393941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.317 [2024-12-10 11:32:26.401630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.317 [2024-12-10 11:32:26.401673] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:04.317 [2024-12-10 11:32:26.401689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.626 ms 00:24:04.317 [2024-12-10 11:32:26.401700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.317 [2024-12-10 11:32:26.433465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.317 [2024-12-10 11:32:26.433508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:04.317 [2024-12-10 11:32:26.433540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.714 ms 00:24:04.317 [2024-12-10 11:32:26.433552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.317 [2024-12-10 11:32:26.451473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.317 [2024-12-10 11:32:26.451519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:04.317 [2024-12-10 11:32:26.451544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.852 ms 00:24:04.317 [2024-12-10 11:32:26.451556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.317 [2024-12-10 11:32:26.451742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.317 [2024-12-10 11:32:26.451763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:04.317 [2024-12-10 11:32:26.451777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:24:04.317 [2024-12-10 11:32:26.451803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.577 [2024-12-10 11:32:26.483666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.577 [2024-12-10 11:32:26.483718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:04.577 [2024-12-10 11:32:26.483735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.838 ms 00:24:04.577 [2024-12-10 11:32:26.483746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.577 [2024-12-10 11:32:26.515019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.577 [2024-12-10 11:32:26.515070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:04.577 [2024-12-10 11:32:26.515085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.171 ms 00:24:04.577 [2024-12-10 11:32:26.515095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.577 [2024-12-10 11:32:26.546806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.577 [2024-12-10 11:32:26.546848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:04.577 [2024-12-10 11:32:26.546864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.634 ms 00:24:04.577 [2024-12-10 11:32:26.546876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.577 [2024-12-10 11:32:26.578027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.577 [2024-12-10 11:32:26.578064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:04.577 [2024-12-10 11:32:26.578079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.053 ms 00:24:04.577 [2024-12-10 11:32:26.578089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.577 [2024-12-10 11:32:26.578152] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:24:04.577 [2024-12-10 11:32:26.578176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.578989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.579000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.579012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:04.577 [2024-12-10 11:32:26.579023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579068] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579369] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:04.578 [2024-12-10 11:32:26.579389] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:04.578 [2024-12-10 11:32:26.579400] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b10030af-2e86-48c7-be0f-b009016a690f 00:24:04.578 [2024-12-10 11:32:26.579412] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:04.578 [2024-12-10 11:32:26.579423] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:04.578 [2024-12-10 11:32:26.579433] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:04.578 [2024-12-10 11:32:26.579443] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:04.578 [2024-12-10 11:32:26.579454] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:04.578 [2024-12-10 11:32:26.579465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:04.578 [2024-12-10 11:32:26.579481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:04.578 [2024-12-10 11:32:26.579491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:04.578 [2024-12-10 11:32:26.579500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:04.578 [2024-12-10 11:32:26.579511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.578 [2024-12-10 11:32:26.579522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:04.578 [2024-12-10 11:32:26.579534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:24:04.578 [2024-12-10 11:32:26.579574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.578 [2024-12-10 11:32:26.596510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.578 [2024-12-10 11:32:26.596563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:04.578 [2024-12-10 11:32:26.596578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.911 ms 00:24:04.578 [2024-12-10 11:32:26.596589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.578 [2024-12-10 11:32:26.597133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.578 [2024-12-10 11:32:26.597159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:04.578 [2024-12-10 11:32:26.597173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:24:04.578 [2024-12-10 11:32:26.597183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.578 [2024-12-10 11:32:26.643793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.578 [2024-12-10 11:32:26.643858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:04.578 [2024-12-10 11:32:26.643893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.578 [2024-12-10 11:32:26.643911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.578 [2024-12-10 11:32:26.644026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.578 [2024-12-10 11:32:26.644044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:04.578 [2024-12-10 11:32:26.644056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.578 [2024-12-10 11:32:26.644067] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.578 [2024-12-10 11:32:26.644133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.578 [2024-12-10 11:32:26.644152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:04.578 [2024-12-10 11:32:26.644164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.578 [2024-12-10 11:32:26.644174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.578 [2024-12-10 11:32:26.644220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.578 [2024-12-10 11:32:26.644233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:04.578 [2024-12-10 11:32:26.644258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.578 [2024-12-10 11:32:26.644268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.837 [2024-12-10 11:32:26.746067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.837 [2024-12-10 11:32:26.746163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:04.837 [2024-12-10 11:32:26.746180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.837 [2024-12-10 11:32:26.746192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.837 [2024-12-10 11:32:26.831417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.837 [2024-12-10 11:32:26.831492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:04.837 [2024-12-10 11:32:26.831509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.837 [2024-12-10 11:32:26.831536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.837 [2024-12-10 11:32:26.831639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.837 [2024-12-10 11:32:26.831654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:04.837 [2024-12-10 11:32:26.831666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.837 [2024-12-10 11:32:26.831729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.837 [2024-12-10 11:32:26.831765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.837 [2024-12-10 11:32:26.831787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:04.837 [2024-12-10 11:32:26.831799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.837 [2024-12-10 11:32:26.831809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.837 [2024-12-10 11:32:26.831948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.837 [2024-12-10 11:32:26.831969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:04.837 [2024-12-10 11:32:26.831982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.837 [2024-12-10 11:32:26.831993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.837 [2024-12-10 11:32:26.832050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.837 [2024-12-10 11:32:26.832067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:04.838 [2024-12-10 11:32:26.832085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:04.838 [2024-12-10 11:32:26.832096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.838 [2024-12-10 11:32:26.832155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.838 [2024-12-10 11:32:26.832170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:04.838 [2024-12-10 11:32:26.832195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.838 [2024-12-10 11:32:26.832205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.838 [2024-12-10 11:32:26.832267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.838 [2024-12-10 11:32:26.832287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:04.838 [2024-12-10 11:32:26.832298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.838 [2024-12-10 11:32:26.832308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.838 [2024-12-10 11:32:26.832471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 451.285 ms, result 0 00:24:05.772 00:24:05.772 00:24:05.772 11:32:27 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78726 00:24:05.772 11:32:27 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:05.772 11:32:27 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78726 00:24:05.772 11:32:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78726 ']' 00:24:05.772 11:32:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.772 11:32:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.772 11:32:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.772 11:32:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.772 11:32:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:06.030 [2024-12-10 11:32:28.044938] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
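The xtrace above is trim.sh restarting the SPDK target for the trim test: it launches spdk_tgt with FTL init-path logging, waits for the RPC socket, and then drives the test over rpc.py. A minimal sketch of that sequence, assuming the spdk_repo layout used by this job and the waitforlisten/killprocess helpers from autotest_common.sh; FTL_CONFIG_JSON is a hypothetical stand-in for the bdev config the test saved earlier (rpc.py load_config reads JSON on stdin):

    build/bin/spdk_tgt -L ftl_init &                  # target with the ftl_init trace flag, as in trim.sh@71
    svcpid=$!
    waitforlisten "$svcpid"                           # poll until /var/tmp/spdk.sock accepts RPCs
    scripts/rpc.py load_config < "$FTL_CONFIG_JSON"   # recreate the ftl0 bdev (hypothetical config path)
    scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
    killprocess "$svcpid"                             # SIGTERM + wait, as at the end of this test

The second unmap sits at the very top of the address space: 23591936 + 1024 equals the 23592960 L2P entries reported in the layout dump below, so the test trims the first and the last 1024 blocks of the device.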
00:24:06.030 [2024-12-10 11:32:28.045126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78726 ] 00:24:06.289 [2024-12-10 11:32:28.227304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.289 [2024-12-10 11:32:28.325585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.224 11:32:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.224 11:32:29 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:07.224 11:32:29 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:07.224 [2024-12-10 11:32:29.377354] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.224 [2024-12-10 11:32:29.377443] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.483 [2024-12-10 11:32:29.564631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-10 11:32:29.564711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:07.483 [2024-12-10 11:32:29.564733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:07.483 [2024-12-10 11:32:29.564745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-10 11:32:29.568871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-10 11:32:29.568923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:07.483 [2024-12-10 11:32:29.568941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.084 ms 00:24:07.483 [2024-12-10 11:32:29.568952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-10 11:32:29.569098] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:07.483 [2024-12-10 11:32:29.570091] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:07.483 [2024-12-10 11:32:29.570142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-10 11:32:29.570155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.483 [2024-12-10 11:32:29.570169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:24:07.483 [2024-12-10 11:32:29.570180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-10 11:32:29.571496] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:07.483 [2024-12-10 11:32:29.587820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.484 [2024-12-10 11:32:29.587887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:07.484 [2024-12-10 11:32:29.587906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.329 ms 00:24:07.484 [2024-12-10 11:32:29.587921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.484 [2024-12-10 11:32:29.588038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.484 [2024-12-10 11:32:29.588062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:07.484 [2024-12-10 11:32:29.588075] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:07.484 [2024-12-10 11:32:29.588088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.484 [2024-12-10 11:32:29.592672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.484 [2024-12-10 11:32:29.592722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:07.484 [2024-12-10 11:32:29.592738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.520 ms 00:24:07.484 [2024-12-10 11:32:29.592754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.484 [2024-12-10 11:32:29.592936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.484 [2024-12-10 11:32:29.592992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:07.484 [2024-12-10 11:32:29.593006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:24:07.484 [2024-12-10 11:32:29.593024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.484 [2024-12-10 11:32:29.593060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.484 [2024-12-10 11:32:29.593077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:07.484 [2024-12-10 11:32:29.593090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:07.484 [2024-12-10 11:32:29.593102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.484 [2024-12-10 11:32:29.593135] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:07.484 [2024-12-10 11:32:29.597373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.484 [2024-12-10 11:32:29.597402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:07.484 [2024-12-10 11:32:29.597419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.244 ms 00:24:07.484 [2024-12-10 11:32:29.597430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.484 [2024-12-10 11:32:29.597498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.484 [2024-12-10 11:32:29.597514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:07.484 [2024-12-10 11:32:29.597551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:07.484 [2024-12-10 11:32:29.597568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.484 [2024-12-10 11:32:29.597603] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:07.484 [2024-12-10 11:32:29.597648] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:07.484 [2024-12-10 11:32:29.597712] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:07.484 [2024-12-10 11:32:29.597737] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:07.484 [2024-12-10 11:32:29.597875] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:07.484 [2024-12-10 11:32:29.597892] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:07.484 [2024-12-10 11:32:29.597919] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:07.484 [2024-12-10 11:32:29.597951] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:07.484 [2024-12-10 11:32:29.597970] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:07.484 [2024-12-10 11:32:29.597984] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:07.484 [2024-12-10 11:32:29.597999] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:07.484 [2024-12-10 11:32:29.598011] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:07.484 [2024-12-10 11:32:29.598030] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:07.484 [2024-12-10 11:32:29.598043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.484 [2024-12-10 11:32:29.598060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:07.484 [2024-12-10 11:32:29.598072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:24:07.484 [2024-12-10 11:32:29.598088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.484 [2024-12-10 11:32:29.598231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.484 [2024-12-10 11:32:29.598261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:07.484 [2024-12-10 11:32:29.598275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:07.484 [2024-12-10 11:32:29.598298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.484 [2024-12-10 11:32:29.598417] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:07.484 [2024-12-10 11:32:29.598440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:07.484 [2024-12-10 11:32:29.598454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.484 [2024-12-10 11:32:29.598471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.484 [2024-12-10 11:32:29.598484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:07.484 [2024-12-10 11:32:29.598503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:07.484 [2024-12-10 11:32:29.598515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:07.484 [2024-12-10 11:32:29.598536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:07.484 [2024-12-10 11:32:29.598548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:07.484 [2024-12-10 11:32:29.598564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.484 [2024-12-10 11:32:29.598577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:07.484 [2024-12-10 11:32:29.598593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:07.484 [2024-12-10 11:32:29.598605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.484 [2024-12-10 11:32:29.598620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:07.484 [2024-12-10 11:32:29.598651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:07.484 [2024-12-10 11:32:29.598671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.484 
[2024-12-10 11:32:29.598683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:07.484 [2024-12-10 11:32:29.598699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:07.484 [2024-12-10 11:32:29.598725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.484 [2024-12-10 11:32:29.598742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:07.484 [2024-12-10 11:32:29.598754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:07.484 [2024-12-10 11:32:29.598771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.484 [2024-12-10 11:32:29.598782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:07.484 [2024-12-10 11:32:29.598802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:07.484 [2024-12-10 11:32:29.598814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.484 [2024-12-10 11:32:29.598829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:07.484 [2024-12-10 11:32:29.598841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:07.484 [2024-12-10 11:32:29.598856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.484 [2024-12-10 11:32:29.598868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:07.484 [2024-12-10 11:32:29.598899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:07.484 [2024-12-10 11:32:29.598910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.484 [2024-12-10 11:32:29.598926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:07.484 [2024-12-10 11:32:29.598937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:07.484 [2024-12-10 11:32:29.598953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.484 [2024-12-10 11:32:29.598979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:07.484 [2024-12-10 11:32:29.598994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:07.484 [2024-12-10 11:32:29.599006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.484 [2024-12-10 11:32:29.599021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:07.484 [2024-12-10 11:32:29.599032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:07.484 [2024-12-10 11:32:29.599050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.484 [2024-12-10 11:32:29.599060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:07.484 [2024-12-10 11:32:29.599075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:07.484 [2024-12-10 11:32:29.599086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.484 [2024-12-10 11:32:29.599100] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:07.484 [2024-12-10 11:32:29.599150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:07.484 [2024-12-10 11:32:29.599166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.484 [2024-12-10 11:32:29.599177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.484 [2024-12-10 11:32:29.599193] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:07.484 [2024-12-10 11:32:29.599204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:07.484 [2024-12-10 11:32:29.599220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:07.484 [2024-12-10 11:32:29.599232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:07.484 [2024-12-10 11:32:29.599247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:07.485 [2024-12-10 11:32:29.599258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:07.485 [2024-12-10 11:32:29.599275] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:07.485 [2024-12-10 11:32:29.599289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.485 [2024-12-10 11:32:29.599313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:07.485 [2024-12-10 11:32:29.599325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:07.485 [2024-12-10 11:32:29.599341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:07.485 [2024-12-10 11:32:29.599353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:07.485 [2024-12-10 11:32:29.599369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:07.485 [2024-12-10 11:32:29.599381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:07.485 [2024-12-10 11:32:29.599397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:07.485 [2024-12-10 11:32:29.599409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:07.485 [2024-12-10 11:32:29.599424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:07.485 [2024-12-10 11:32:29.599437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:07.485 [2024-12-10 11:32:29.599469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:07.485 [2024-12-10 11:32:29.599481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:07.485 [2024-12-10 11:32:29.599496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:07.485 [2024-12-10 11:32:29.599508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:07.485 [2024-12-10 11:32:29.599539] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:07.485 [2024-12-10 
11:32:29.599552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.485 [2024-12-10 11:32:29.599573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:07.485 [2024-12-10 11:32:29.599586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:07.485 [2024-12-10 11:32:29.599602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:07.485 [2024-12-10 11:32:29.599615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:07.485 [2024-12-10 11:32:29.599663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.485 [2024-12-10 11:32:29.599679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:07.485 [2024-12-10 11:32:29.599697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:24:07.485 [2024-12-10 11:32:29.599715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.485 [2024-12-10 11:32:29.634755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.485 [2024-12-10 11:32:29.634807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:07.485 [2024-12-10 11:32:29.634830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.955 ms 00:24:07.485 [2024-12-10 11:32:29.634845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.485 [2024-12-10 11:32:29.635073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.485 [2024-12-10 11:32:29.635091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:07.485 [2024-12-10 11:32:29.635123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:24:07.485 [2024-12-10 11:32:29.635134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.677651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.677723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:07.743 [2024-12-10 11:32:29.677744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.478 ms 00:24:07.743 [2024-12-10 11:32:29.677756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.677895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.677912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:07.743 [2024-12-10 11:32:29.677941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:07.743 [2024-12-10 11:32:29.677951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.678279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.678311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:07.743 [2024-12-10 11:32:29.678330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:24:07.743 [2024-12-10 11:32:29.678343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.678502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.678520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:07.743 [2024-12-10 11:32:29.678538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:24:07.743 [2024-12-10 11:32:29.678549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.698079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.698140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:07.743 [2024-12-10 11:32:29.698161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.497 ms 00:24:07.743 [2024-12-10 11:32:29.698172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.731757] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:07.743 [2024-12-10 11:32:29.731814] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:07.743 [2024-12-10 11:32:29.731841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.731881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:07.743 [2024-12-10 11:32:29.731902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.507 ms 00:24:07.743 [2024-12-10 11:32:29.731931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.761729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.761786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:07.743 [2024-12-10 11:32:29.761811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.686 ms 00:24:07.743 [2024-12-10 11:32:29.761825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.777796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.777848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:07.743 [2024-12-10 11:32:29.777875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.825 ms 00:24:07.743 [2024-12-10 11:32:29.777887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.793492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.793542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:07.743 [2024-12-10 11:32:29.793581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.470 ms 00:24:07.743 [2024-12-10 11:32:29.793594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.794546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.794582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:07.743 [2024-12-10 11:32:29.794603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:24:07.743 [2024-12-10 11:32:29.794616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 
11:32:29.869616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.869696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:07.743 [2024-12-10 11:32:29.869719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.939 ms 00:24:07.743 [2024-12-10 11:32:29.869732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.882626] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:07.743 [2024-12-10 11:32:29.896481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.896582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:07.743 [2024-12-10 11:32:29.896605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.609 ms 00:24:07.743 [2024-12-10 11:32:29.896620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.896791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.896815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:07.743 [2024-12-10 11:32:29.896829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:07.743 [2024-12-10 11:32:29.896842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.896908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.896927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:07.743 [2024-12-10 11:32:29.896941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:07.743 [2024-12-10 11:32:29.896957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.896988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.897008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:07.743 [2024-12-10 11:32:29.897020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:07.743 [2024-12-10 11:32:29.897034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.743 [2024-12-10 11:32:29.897088] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:07.743 [2024-12-10 11:32:29.897108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.743 [2024-12-10 11:32:29.897123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:07.743 [2024-12-10 11:32:29.897137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:07.743 [2024-12-10 11:32:29.897148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.005 [2024-12-10 11:32:29.928737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.005 [2024-12-10 11:32:29.928788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:08.005 [2024-12-10 11:32:29.928810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.549 ms 00:24:08.005 [2024-12-10 11:32:29.928823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.005 [2024-12-10 11:32:29.928974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.005 [2024-12-10 11:32:29.928995] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:08.005 [2024-12-10 11:32:29.929011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:08.005 [2024-12-10 11:32:29.929026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.005 [2024-12-10 11:32:29.930080] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:08.005 [2024-12-10 11:32:29.934398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 365.041 ms, result 0 00:24:08.005 [2024-12-10 11:32:29.935468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:08.005 Some configs were skipped because the RPC state that can call them passed over. 00:24:08.005 11:32:29 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:08.264 [2024-12-10 11:32:30.269700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.264 [2024-12-10 11:32:30.269762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:08.264 [2024-12-10 11:32:30.269783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:24:08.264 [2024-12-10 11:32:30.269798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.264 [2024-12-10 11:32:30.269845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.649 ms, result 0 00:24:08.264 true 00:24:08.264 11:32:30 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:08.523 [2024-12-10 11:32:30.557716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.523 [2024-12-10 11:32:30.557774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:08.523 [2024-12-10 11:32:30.557797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:24:08.523 [2024-12-10 11:32:30.557809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.523 [2024-12-10 11:32:30.557870] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.219 ms, result 0 00:24:08.523 true 00:24:08.523 11:32:30 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78726 00:24:08.524 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78726 ']' 00:24:08.524 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78726 00:24:08.524 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:24:08.524 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.524 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78726 00:24:08.524 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:08.524 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:08.524 killing process with pid 78726 00:24:08.524 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78726' 00:24:08.524 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78726 00:24:08.524 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78726 00:24:09.461 [2024-12-10 11:32:31.531149] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.461 [2024-12-10 11:32:31.531237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:09.461 [2024-12-10 11:32:31.531272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:09.461 [2024-12-10 11:32:31.531286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.461 [2024-12-10 11:32:31.531317] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:09.461 [2024-12-10 11:32:31.534765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.461 [2024-12-10 11:32:31.534816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:09.462 [2024-12-10 11:32:31.534852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.421 ms 00:24:09.462 [2024-12-10 11:32:31.534864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.462 [2024-12-10 11:32:31.535273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.462 [2024-12-10 11:32:31.535303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:09.462 [2024-12-10 11:32:31.535320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:24:09.462 [2024-12-10 11:32:31.535332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.462 [2024-12-10 11:32:31.539277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.462 [2024-12-10 11:32:31.539335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:09.462 [2024-12-10 11:32:31.539358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.915 ms 00:24:09.462 [2024-12-10 11:32:31.539369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.462 [2024-12-10 11:32:31.547102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.462 [2024-12-10 11:32:31.547151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:09.462 [2024-12-10 11:32:31.547183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.682 ms 00:24:09.462 [2024-12-10 11:32:31.547194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.462 [2024-12-10 11:32:31.559258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.462 [2024-12-10 11:32:31.559318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:09.462 [2024-12-10 11:32:31.559356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.000 ms 00:24:09.462 [2024-12-10 11:32:31.559367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.462 [2024-12-10 11:32:31.568027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.462 [2024-12-10 11:32:31.568074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:09.462 [2024-12-10 11:32:31.568094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.610 ms 00:24:09.462 [2024-12-10 11:32:31.568107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.462 [2024-12-10 11:32:31.568280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.462 [2024-12-10 11:32:31.568300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:09.462 [2024-12-10 11:32:31.568338] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:24:09.462 [2024-12-10 11:32:31.568353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.462 [2024-12-10 11:32:31.581032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.462 [2024-12-10 11:32:31.581110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:09.462 [2024-12-10 11:32:31.581146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.638 ms 00:24:09.462 [2024-12-10 11:32:31.581157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.462 [2024-12-10 11:32:31.593518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.462 [2024-12-10 11:32:31.593588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:09.462 [2024-12-10 11:32:31.593625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.310 ms 00:24:09.462 [2024-12-10 11:32:31.593637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.462 [2024-12-10 11:32:31.605494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.462 [2024-12-10 11:32:31.605564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:09.462 [2024-12-10 11:32:31.605599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.796 ms 00:24:09.462 [2024-12-10 11:32:31.605610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.462 [2024-12-10 11:32:31.617688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.462 [2024-12-10 11:32:31.617740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:09.462 [2024-12-10 11:32:31.617774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.991 ms 00:24:09.462 [2024-12-10 11:32:31.617785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.462 [2024-12-10 11:32:31.617831] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:09.462 [2024-12-10 11:32:31.617852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:09.462 [2024-12-10 11:32:31.617868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:09.462 [2024-12-10 11:32:31.617879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:09.462 [2024-12-10 11:32:31.617892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:09.462 [2024-12-10 11:32:31.617903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:09.462 [2024-12-10 11:32:31.617919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:09.462 [2024-12-10 11:32:31.617930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:09.462 [2024-12-10 11:32:31.617942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:09.462 [2024-12-10 11:32:31.617953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:09.462 [2024-12-10 11:32:31.617966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:09.462 [2024-12-10 
11:32:31.617999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:09.462 [Bands 12 through 84: all 0 / 261120 wr_cnt: 0 state: free] 00:24:09.463 [2024-12-10 11:32:31.619329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:09.463 [2024-12-10 11:32:31.619647] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:09.463 [2024-12-10 11:32:31.619676] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b10030af-2e86-48c7-be0f-b009016a690f 00:24:09.463 [2024-12-10 11:32:31.619692] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:09.463 [2024-12-10 11:32:31.619711] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:09.463 [2024-12-10 11:32:31.619731] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:09.463 [2024-12-10 11:32:31.619754] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:09.463 [2024-12-10 11:32:31.619766] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:09.463 [2024-12-10 11:32:31.619780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:09.463 [2024-12-10 11:32:31.619791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:09.463 [2024-12-10 11:32:31.619802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:09.463 [2024-12-10 11:32:31.619812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:09.463 [2024-12-10 11:32:31.619826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:09.463 [2024-12-10 11:32:31.619838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:09.463 [2024-12-10 11:32:31.619873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:24:09.463 [2024-12-10 11:32:31.619893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.637037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.723 [2024-12-10 11:32:31.637091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:09.723 [2024-12-10 11:32:31.637129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.085 ms 00:24:09.723 [2024-12-10 11:32:31.637140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.637744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.723 [2024-12-10 11:32:31.637792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:09.723 [2024-12-10 11:32:31.637813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:24:09.723 [2024-12-10 11:32:31.637824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.696931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.723 [2024-12-10 11:32:31.696994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:09.723 [2024-12-10 11:32:31.697030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.723 [2024-12-10 11:32:31.697042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.697185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.723 [2024-12-10 11:32:31.697203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:09.723 [2024-12-10 11:32:31.697221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.723 [2024-12-10 11:32:31.697232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.697314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.723 [2024-12-10 11:32:31.697342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:09.723 [2024-12-10 11:32:31.697366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.723 [2024-12-10 11:32:31.697383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.697430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.723 [2024-12-10 11:32:31.697454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:09.723 [2024-12-10 11:32:31.697480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.723 [2024-12-10 11:32:31.697499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.801560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.723 [2024-12-10 11:32:31.801664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:09.723 [2024-12-10 11:32:31.801687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.723 [2024-12-10 11:32:31.801699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 
11:32:31.886046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.723 [2024-12-10 11:32:31.886142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:09.723 [2024-12-10 11:32:31.886168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.723 [2024-12-10 11:32:31.886187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.886304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.723 [2024-12-10 11:32:31.886324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:09.723 [2024-12-10 11:32:31.886346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.723 [2024-12-10 11:32:31.886359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.886410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.723 [2024-12-10 11:32:31.886434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:09.723 [2024-12-10 11:32:31.886464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.723 [2024-12-10 11:32:31.886485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.886675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.723 [2024-12-10 11:32:31.886702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:09.723 [2024-12-10 11:32:31.886722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.723 [2024-12-10 11:32:31.886735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.886813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.723 [2024-12-10 11:32:31.886842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:09.723 [2024-12-10 11:32:31.886873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.723 [2024-12-10 11:32:31.886887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.886977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.723 [2024-12-10 11:32:31.887009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:09.723 [2024-12-10 11:32:31.887035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.723 [2024-12-10 11:32:31.887048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.887116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.723 [2024-12-10 11:32:31.887138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:09.723 [2024-12-10 11:32:31.887157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.723 [2024-12-10 11:32:31.887169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.723 [2024-12-10 11:32:31.887385] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.186 ms, result 0 00:24:10.659 11:32:32 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:10.659 11:32:32 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:10.918 [2024-12-10 11:32:32.908494] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:24:10.918 [2024-12-10 11:32:32.908705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78793 ] 00:24:11.176 [2024-12-10 11:32:33.091572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.176 [2024-12-10 11:32:33.193343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.435 [2024-12-10 11:32:33.516702] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.435 [2024-12-10 11:32:33.516791] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.695 [2024-12-10 11:32:33.678689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.695 [2024-12-10 11:32:33.678774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:11.695 [2024-12-10 11:32:33.678796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:11.695 [2024-12-10 11:32:33.678808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.695 [2024-12-10 11:32:33.682359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.695 [2024-12-10 11:32:33.682411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:11.695 [2024-12-10 11:32:33.682428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.521 ms 00:24:11.695 [2024-12-10 11:32:33.682440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.695 [2024-12-10 11:32:33.682765] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:11.695 [2024-12-10 11:32:33.683795] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:11.695 [2024-12-10 11:32:33.683841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.695 [2024-12-10 11:32:33.683871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:11.695 [2024-12-10 11:32:33.683886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:24:11.695 [2024-12-10 11:32:33.683897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.695 [2024-12-10 11:32:33.685147] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:11.695 [2024-12-10 11:32:33.702287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.695 [2024-12-10 11:32:33.702336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:11.695 [2024-12-10 11:32:33.702355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.141 ms 00:24:11.695 [2024-12-10 11:32:33.702368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.696 [2024-12-10 11:32:33.702495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.696 [2024-12-10 11:32:33.702518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:11.696 [2024-12-10 11:32:33.702532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:24:11.696 [2024-12-10 11:32:33.702543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.696 [2024-12-10 11:32:33.706935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.696 [2024-12-10 11:32:33.706982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:11.696 [2024-12-10 11:32:33.706999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.322 ms 00:24:11.696 [2024-12-10 11:32:33.707011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.696 [2024-12-10 11:32:33.707139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.696 [2024-12-10 11:32:33.707160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:11.696 [2024-12-10 11:32:33.707173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:11.696 [2024-12-10 11:32:33.707184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.696 [2024-12-10 11:32:33.707228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.696 [2024-12-10 11:32:33.707243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:11.696 [2024-12-10 11:32:33.707255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:11.696 [2024-12-10 11:32:33.707266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.696 [2024-12-10 11:32:33.707298] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:11.696 [2024-12-10 11:32:33.711609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.696 [2024-12-10 11:32:33.711677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:11.696 [2024-12-10 11:32:33.711696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.320 ms 00:24:11.696 [2024-12-10 11:32:33.711707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.696 [2024-12-10 11:32:33.711782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.696 [2024-12-10 11:32:33.711801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:11.696 [2024-12-10 11:32:33.711814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:11.696 [2024-12-10 11:32:33.711825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.696 [2024-12-10 11:32:33.711877] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:11.696 [2024-12-10 11:32:33.711916] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:11.696 [2024-12-10 11:32:33.711961] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:11.696 [2024-12-10 11:32:33.711981] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:11.696 [2024-12-10 11:32:33.712096] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:11.696 [2024-12-10 11:32:33.712112] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:11.696 [2024-12-10 11:32:33.712127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:11.696 [2024-12-10 11:32:33.712146] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:11.696 [2024-12-10 11:32:33.712159] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:11.696 [2024-12-10 11:32:33.712171] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:11.696 [2024-12-10 11:32:33.712197] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:11.696 [2024-12-10 11:32:33.712223] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:11.696 [2024-12-10 11:32:33.712234] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:11.696 [2024-12-10 11:32:33.712245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.696 [2024-12-10 11:32:33.712255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:11.696 [2024-12-10 11:32:33.712267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:24:11.696 [2024-12-10 11:32:33.712277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.696 [2024-12-10 11:32:33.712372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.696 [2024-12-10 11:32:33.712392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:11.696 [2024-12-10 11:32:33.712403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:11.696 [2024-12-10 11:32:33.712414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.696 [2024-12-10 11:32:33.712544] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:11.696 [2024-12-10 11:32:33.712560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:11.696 [2024-12-10 11:32:33.712572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.696 [2024-12-10 11:32:33.712583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.696 [2024-12-10 11:32:33.712594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:11.696 [2024-12-10 11:32:33.712604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:11.696 [2024-12-10 11:32:33.712614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:11.696 [2024-12-10 11:32:33.712625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:11.696 [2024-12-10 11:32:33.712634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:11.696 [2024-12-10 11:32:33.712661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.696 [2024-12-10 11:32:33.712693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:11.696 [2024-12-10 11:32:33.712721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:11.696 [2024-12-10 11:32:33.712732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.696 [2024-12-10 11:32:33.712743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:11.696 [2024-12-10 11:32:33.712754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:11.696 [2024-12-10 11:32:33.712778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.696 [2024-12-10 11:32:33.712789] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:11.696 [2024-12-10 11:32:33.712800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:11.696 [2024-12-10 11:32:33.712810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.696 [2024-12-10 11:32:33.712823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:11.696 [2024-12-10 11:32:33.712833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:11.696 [2024-12-10 11:32:33.712843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.696 [2024-12-10 11:32:33.712854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:11.696 [2024-12-10 11:32:33.712864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:11.696 [2024-12-10 11:32:33.712878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.696 [2024-12-10 11:32:33.712888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:11.696 [2024-12-10 11:32:33.712898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:11.696 [2024-12-10 11:32:33.712908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.696 [2024-12-10 11:32:33.712918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:11.696 [2024-12-10 11:32:33.712928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:11.696 [2024-12-10 11:32:33.712938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.696 [2024-12-10 11:32:33.712948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:11.696 [2024-12-10 11:32:33.712958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:11.696 [2024-12-10 11:32:33.712968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.696 [2024-12-10 11:32:33.712978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:11.696 [2024-12-10 11:32:33.712988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:11.696 [2024-12-10 11:32:33.712998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.696 [2024-12-10 11:32:33.713008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:11.696 [2024-12-10 11:32:33.713018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:11.696 [2024-12-10 11:32:33.713027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.696 [2024-12-10 11:32:33.713037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:11.696 [2024-12-10 11:32:33.713047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:11.696 [2024-12-10 11:32:33.713064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.696 [2024-12-10 11:32:33.713078] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:11.696 [2024-12-10 11:32:33.713090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:11.696 [2024-12-10 11:32:33.713106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.696 [2024-12-10 11:32:33.713117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.696 [2024-12-10 11:32:33.713128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:11.696 
[2024-12-10 11:32:33.713139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:11.696 [2024-12-10 11:32:33.713149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:11.696 [2024-12-10 11:32:33.713160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:11.696 [2024-12-10 11:32:33.713170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:11.696 [2024-12-10 11:32:33.713184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:11.696 [2024-12-10 11:32:33.713205] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:11.696 [2024-12-10 11:32:33.713228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.696 [2024-12-10 11:32:33.713244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:11.696 [2024-12-10 11:32:33.713255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:11.696 [2024-12-10 11:32:33.713266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:11.697 [2024-12-10 11:32:33.713277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:11.697 [2024-12-10 11:32:33.713293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:11.697 [2024-12-10 11:32:33.713311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:11.697 [2024-12-10 11:32:33.713324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:11.697 [2024-12-10 11:32:33.713335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:11.697 [2024-12-10 11:32:33.713347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:11.697 [2024-12-10 11:32:33.713359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:11.697 [2024-12-10 11:32:33.713370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:11.697 [2024-12-10 11:32:33.713383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:11.697 [2024-12-10 11:32:33.713397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:11.697 [2024-12-10 11:32:33.713409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:11.697 [2024-12-10 11:32:33.713420] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:11.697 [2024-12-10 11:32:33.713433] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.697 [2024-12-10 11:32:33.713445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:11.697 [2024-12-10 11:32:33.713456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:11.697 [2024-12-10 11:32:33.713467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:11.697 [2024-12-10 11:32:33.713481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:11.697 [2024-12-10 11:32:33.713502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.697 [2024-12-10 11:32:33.713529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:11.697 [2024-12-10 11:32:33.713542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:24:11.697 [2024-12-10 11:32:33.713553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.697 [2024-12-10 11:32:33.746596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.697 [2024-12-10 11:32:33.746685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:11.697 [2024-12-10 11:32:33.746708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.940 ms 00:24:11.697 [2024-12-10 11:32:33.746720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.697 [2024-12-10 11:32:33.746922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.697 [2024-12-10 11:32:33.746941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:11.697 [2024-12-10 11:32:33.746954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:11.697 [2024-12-10 11:32:33.746965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.697 [2024-12-10 11:32:33.795834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.697 [2024-12-10 11:32:33.796108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:11.697 [2024-12-10 11:32:33.796148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.835 ms 00:24:11.697 [2024-12-10 11:32:33.796168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.697 [2024-12-10 11:32:33.796346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.697 [2024-12-10 11:32:33.796368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:11.697 [2024-12-10 11:32:33.796381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:11.697 [2024-12-10 11:32:33.796393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.697 [2024-12-10 11:32:33.796750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.697 [2024-12-10 11:32:33.796770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:11.697 [2024-12-10 11:32:33.796790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:24:11.697 [2024-12-10 11:32:33.796802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.697 [2024-12-10 
11:32:33.796960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.697 [2024-12-10 11:32:33.796979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:11.697 [2024-12-10 11:32:33.796999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:24:11.697 [2024-12-10 11:32:33.797013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.697 [2024-12-10 11:32:33.813811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.697 [2024-12-10 11:32:33.813875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:11.697 [2024-12-10 11:32:33.813893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.757 ms 00:24:11.697 [2024-12-10 11:32:33.813904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.697 [2024-12-10 11:32:33.830165] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:11.697 [2024-12-10 11:32:33.830205] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:11.697 [2024-12-10 11:32:33.830240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.697 [2024-12-10 11:32:33.830252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:11.697 [2024-12-10 11:32:33.830264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.193 ms 00:24:11.697 [2024-12-10 11:32:33.830275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.697 [2024-12-10 11:32:33.859990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.697 [2024-12-10 11:32:33.860165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:11.697 [2024-12-10 11:32:33.860204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.620 ms 00:24:11.697 [2024-12-10 11:32:33.860229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.956 [2024-12-10 11:32:33.876200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.956 [2024-12-10 11:32:33.876255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:11.956 [2024-12-10 11:32:33.876288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.860 ms 00:24:11.956 [2024-12-10 11:32:33.876298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.956 [2024-12-10 11:32:33.892223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.956 [2024-12-10 11:32:33.892277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:11.956 [2024-12-10 11:32:33.892310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.832 ms 00:24:11.956 [2024-12-10 11:32:33.892320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.956 [2024-12-10 11:32:33.893222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.956 [2024-12-10 11:32:33.893415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:11.956 [2024-12-10 11:32:33.893454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:24:11.956 [2024-12-10 11:32:33.893471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.956 [2024-12-10 11:32:33.967810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:11.956 [2024-12-10 11:32:33.968096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:11.956 [2024-12-10 11:32:33.968237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.288 ms 00:24:11.956 [2024-12-10 11:32:33.968393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.956 [2024-12-10 11:32:33.981361] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:11.956 [2024-12-10 11:32:33.995175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.956 [2024-12-10 11:32:33.995499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:11.956 [2024-12-10 11:32:33.995665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.553 ms 00:24:11.956 [2024-12-10 11:32:33.995802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.956 [2024-12-10 11:32:33.996037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.956 [2024-12-10 11:32:33.996132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:11.956 [2024-12-10 11:32:33.996260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:11.956 [2024-12-10 11:32:33.996430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.956 [2024-12-10 11:32:33.996581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.956 [2024-12-10 11:32:33.996677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:11.956 [2024-12-10 11:32:33.996809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:11.956 [2024-12-10 11:32:33.996954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.956 [2024-12-10 11:32:33.997059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.956 [2024-12-10 11:32:33.997131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:11.956 [2024-12-10 11:32:33.997245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:11.956 [2024-12-10 11:32:33.997307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.956 [2024-12-10 11:32:33.997538] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:11.956 [2024-12-10 11:32:33.997708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.956 [2024-12-10 11:32:33.997784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:11.956 [2024-12-10 11:32:33.997907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:24:11.956 [2024-12-10 11:32:33.997971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.956 [2024-12-10 11:32:34.028579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.956 [2024-12-10 11:32:34.028826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:11.956 [2024-12-10 11:32:34.028961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.503 ms 00:24:11.956 [2024-12-10 11:32:34.029102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.956 [2024-12-10 11:32:34.029313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.956 [2024-12-10 11:32:34.029403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:24:11.956 [2024-12-10 11:32:34.029543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:11.956 [2024-12-10 11:32:34.029702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.956 [2024-12-10 11:32:34.030938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:11.956 [2024-12-10 11:32:34.035249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.866 ms, result 0 00:24:11.957 [2024-12-10 11:32:34.036285] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:11.957 [2024-12-10 11:32:34.052479] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:12.891  [2024-12-10T11:32:36.432Z] Copying: 24/256 [MB] (24 MBps) [2024-12-10T11:32:37.369Z] Copying: 48/256 [MB] (23 MBps) [2024-12-10T11:32:38.304Z] Copying: 73/256 [MB] (24 MBps) [2024-12-10T11:32:39.238Z] Copying: 96/256 [MB] (23 MBps) [2024-12-10T11:32:40.172Z] Copying: 120/256 [MB] (24 MBps) [2024-12-10T11:32:41.106Z] Copying: 143/256 [MB] (22 MBps) [2024-12-10T11:32:42.479Z] Copying: 167/256 [MB] (23 MBps) [2024-12-10T11:32:43.413Z] Copying: 190/256 [MB] (23 MBps) [2024-12-10T11:32:44.347Z] Copying: 214/256 [MB] (23 MBps) [2024-12-10T11:32:44.914Z] Copying: 237/256 [MB] (23 MBps) [2024-12-10T11:32:44.914Z] Copying: 256/256 [MB] (average 23 MBps)[2024-12-10 11:32:44.843265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:22.747 [2024-12-10 11:32:44.855938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.747 [2024-12-10 11:32:44.855995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:22.747 [2024-12-10 11:32:44.856016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:22.747 [2024-12-10 11:32:44.856029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.747 [2024-12-10 11:32:44.856063] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:22.747 [2024-12-10 11:32:44.859422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.747 [2024-12-10 11:32:44.859456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:22.747 [2024-12-10 11:32:44.859488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.336 ms 00:24:22.747 [2024-12-10 11:32:44.859499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.747 [2024-12-10 11:32:44.859813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.747 [2024-12-10 11:32:44.859835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:22.747 [2024-12-10 11:32:44.859849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:24:22.747 [2024-12-10 11:32:44.859871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.747 [2024-12-10 11:32:44.863660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.747 [2024-12-10 11:32:44.863690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:22.747 [2024-12-10 11:32:44.863706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.753 ms 00:24:22.747 [2024-12-10 11:32:44.863717] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.747 [2024-12-10 11:32:44.871263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.747 [2024-12-10 11:32:44.871296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:22.747 [2024-12-10 11:32:44.871311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.521 ms 00:24:22.747 [2024-12-10 11:32:44.871323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.747 [2024-12-10 11:32:44.903239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.747 [2024-12-10 11:32:44.903453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:22.747 [2024-12-10 11:32:44.903483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.851 ms 00:24:22.747 [2024-12-10 11:32:44.903496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.007 [2024-12-10 11:32:44.921830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.007 [2024-12-10 11:32:44.922041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:23.007 [2024-12-10 11:32:44.922073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.261 ms 00:24:23.007 [2024-12-10 11:32:44.922087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.007 [2024-12-10 11:32:44.922289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.007 [2024-12-10 11:32:44.922311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:23.007 [2024-12-10 11:32:44.922339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:24:23.007 [2024-12-10 11:32:44.922351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.007 [2024-12-10 11:32:44.956179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.007 [2024-12-10 11:32:44.956411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:23.007 [2024-12-10 11:32:44.956441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.802 ms 00:24:23.007 [2024-12-10 11:32:44.956455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.007 [2024-12-10 11:32:44.988349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.007 [2024-12-10 11:32:44.988394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:23.007 [2024-12-10 11:32:44.988428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.820 ms 00:24:23.007 [2024-12-10 11:32:44.988438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.007 [2024-12-10 11:32:45.019376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.007 [2024-12-10 11:32:45.019418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:23.007 [2024-12-10 11:32:45.019451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.858 ms 00:24:23.007 [2024-12-10 11:32:45.019461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.007 [2024-12-10 11:32:45.050149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.007 [2024-12-10 11:32:45.050190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:23.007 [2024-12-10 11:32:45.050223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 30.583 ms 00:24:23.007 [2024-12-10 11:32:45.050234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.007 [2024-12-10 11:32:45.050300] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:23.007 [2024-12-10 11:32:45.050324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 
[2024-12-10 11:32:45.050573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:23.007 [2024-12-10 11:32:45.050606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:24:23.008 [2024-12-10 11:32:45.050917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.050995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:23.008 [2024-12-10 11:32:45.051547] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:23.008 [2024-12-10 11:32:45.051558] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b10030af-2e86-48c7-be0f-b009016a690f 00:24:23.008 [2024-12-10 11:32:45.051570] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:23.008 [2024-12-10 11:32:45.051581] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:23.008 [2024-12-10 11:32:45.051591] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:23.008 [2024-12-10 11:32:45.051603] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:23.008 [2024-12-10 11:32:45.051613] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:23.008 [2024-12-10 11:32:45.051651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:23.008 [2024-12-10 11:32:45.051663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:23.008 [2024-12-10 11:32:45.051673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:23.008 [2024-12-10 11:32:45.051683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:23.008 [2024-12-10 11:32:45.051694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.008 [2024-12-10 11:32:45.051706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:23.008 [2024-12-10 11:32:45.051718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms 00:24:23.008 [2024-12-10 11:32:45.051730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.008 [2024-12-10 11:32:45.068071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.008 [2024-12-10 11:32:45.068114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:23.008 [2024-12-10 11:32:45.068131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.314 ms 00:24:23.008 [2024-12-10 11:32:45.068159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.008 [2024-12-10 11:32:45.068670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.008 [2024-12-10 11:32:45.068720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:23.008 [2024-12-10 11:32:45.068736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:24:23.008 [2024-12-10 11:32:45.068748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.008 [2024-12-10 11:32:45.116766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.009 [2024-12-10 11:32:45.117003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:23.009 [2024-12-10 11:32:45.117042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.009 [2024-12-10 11:32:45.117055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.009 [2024-12-10 11:32:45.117176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.009 [2024-12-10 
11:32:45.117194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:23.009 [2024-12-10 11:32:45.117206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.009 [2024-12-10 11:32:45.117217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.009 [2024-12-10 11:32:45.117304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.009 [2024-12-10 11:32:45.117324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:23.009 [2024-12-10 11:32:45.117337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.009 [2024-12-10 11:32:45.117356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.009 [2024-12-10 11:32:45.117382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.009 [2024-12-10 11:32:45.117396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:23.009 [2024-12-10 11:32:45.117408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.009 [2024-12-10 11:32:45.117420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.267 [2024-12-10 11:32:45.224130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.267 [2024-12-10 11:32:45.224201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:23.267 [2024-12-10 11:32:45.224220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.267 [2024-12-10 11:32:45.224241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.267 [2024-12-10 11:32:45.309843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.267 [2024-12-10 11:32:45.310177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:23.267 [2024-12-10 11:32:45.310210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.267 [2024-12-10 11:32:45.310223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.267 [2024-12-10 11:32:45.310312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.267 [2024-12-10 11:32:45.310330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:23.267 [2024-12-10 11:32:45.310343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.267 [2024-12-10 11:32:45.310355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.267 [2024-12-10 11:32:45.310408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.267 [2024-12-10 11:32:45.310424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:23.267 [2024-12-10 11:32:45.310436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.267 [2024-12-10 11:32:45.310448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.267 [2024-12-10 11:32:45.310579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.267 [2024-12-10 11:32:45.310599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:23.267 [2024-12-10 11:32:45.310612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.267 [2024-12-10 11:32:45.310623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.267 [2024-12-10 11:32:45.310701] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.267 [2024-12-10 11:32:45.310725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:23.267 [2024-12-10 11:32:45.310739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.267 [2024-12-10 11:32:45.310751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.267 [2024-12-10 11:32:45.310798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.267 [2024-12-10 11:32:45.310813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:23.267 [2024-12-10 11:32:45.310824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.267 [2024-12-10 11:32:45.310837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.267 [2024-12-10 11:32:45.310895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.267 [2024-12-10 11:32:45.310924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:23.267 [2024-12-10 11:32:45.310936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.267 [2024-12-10 11:32:45.310947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.267 [2024-12-10 11:32:45.311115] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 455.199 ms, result 0 00:24:24.200 00:24:24.200 00:24:24.200 11:32:46 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:24:24.200 11:32:46 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:24.766 11:32:46 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:25.024 [2024-12-10 11:32:46.948080] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
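
The cmp --bytes=4194304 ... /dev/zero and md5sum steps above are the test's data check after the trim: the first 4 MiB read back from the FTL device are expected to compare equal to zeroes. A minimal Python sketch of the same check, assuming "data" is the dumped LBA range as in the trim.sh invocation above (the zero-read-after-trim intent is inferred from the commands, not from the script source):

    import hashlib

    CHECK_BYTES = 4 * 1024 * 1024  # matches cmp --bytes=4194304 above

    def range_is_zero(path, nbytes=CHECK_BYTES):
        # True if the first nbytes of `path` are all zero bytes, i.e. the
        # equivalent of `cmp --bytes=<n> <path> /dev/zero` succeeding.
        remaining = nbytes
        with open(path, "rb") as f:
            while remaining:
                chunk = f.read(min(remaining, 1 << 20))
                if not chunk:
                    return False  # file shorter than the checked range
                if chunk.count(0) != len(chunk):
                    return False
                remaining -= len(chunk)
        return True

    def md5_of(path):
        # Equivalent of the md5sum step above.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()
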
00:24:25.024 [2024-12-10 11:32:46.948236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78931 ] 00:24:25.024 [2024-12-10 11:32:47.123639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.281 [2024-12-10 11:32:47.228370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.539 [2024-12-10 11:32:47.554050] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:25.539 [2024-12-10 11:32:47.554140] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:25.798 [2024-12-10 11:32:47.716345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.798 [2024-12-10 11:32:47.716417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:25.798 [2024-12-10 11:32:47.716437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:25.798 [2024-12-10 11:32:47.716448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.798 [2024-12-10 11:32:47.719784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.798 [2024-12-10 11:32:47.719841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.798 [2024-12-10 11:32:47.719857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.291 ms 00:24:25.798 [2024-12-10 11:32:47.719878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.798 [2024-12-10 11:32:47.720009] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:25.798 [2024-12-10 11:32:47.720959] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:25.798 [2024-12-10 11:32:47.720994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.798 [2024-12-10 11:32:47.721006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.799 [2024-12-10 11:32:47.721019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:24:25.799 [2024-12-10 11:32:47.721030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.799 [2024-12-10 11:32:47.722249] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:25.799 [2024-12-10 11:32:47.738983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.799 [2024-12-10 11:32:47.739038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:25.799 [2024-12-10 11:32:47.739054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.753 ms 00:24:25.799 [2024-12-10 11:32:47.739066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.799 [2024-12-10 11:32:47.739188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.799 [2024-12-10 11:32:47.739210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:25.799 [2024-12-10 11:32:47.739223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:25.799 [2024-12-10 11:32:47.739234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.799 [2024-12-10 11:32:47.743751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
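
Every management step in this log is a fixed quadruple of trace_step records: Action (or Rollback), then name:, duration: and status: lines for the same [FTL][ftl0] device. A small parser for that format (an informal helper, not an SPDK tool; it assumes one record per line, as in the raw console output):

    import re
    import sys
    from collections import namedtuple

    Step = namedtuple("Step", "kind name duration_ms status")

    # One trace_step record per line: Action/Rollback, name:, duration:, status:
    STEP_RE = re.compile(
        r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] "
        r"(Action|Rollback|name: .+|duration: [\d.]+ ms|status: \d+)\s*$"
    )

    def parse_steps(lines):
        steps, kind, name, dur = [], None, None, None
        for line in lines:
            m = STEP_RE.search(line)
            if not m:
                continue
            body = m.group(1)
            if body in ("Action", "Rollback"):
                kind = body
            elif body.startswith("name: "):
                name = body[len("name: "):]
            elif body.startswith("duration: "):
                dur = float(body.split()[1])
            elif kind and name and dur is not None:
                # the status: line closes the quadruple
                steps.append(Step(kind, name, dur, int(body.split()[1])))
                kind = name = dur = None
        return steps

    if __name__ == "__main__":
        # e.g.  parse_trace.py < console.log  -> slowest steps first
        for s in sorted(parse_steps(sys.stdin), key=lambda s: -s.duration_ms):
            print(f"{s.duration_ms:9.3f} ms  {s.kind:8s}  {s.name}  (status {s.status})")

On this run it would rank Restore P2L checkpoints (~73-74 ms), Initialize NV cache (~54 ms) and the various persist steps (~30 ms each) at the top.
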
00:24:25.799 [2024-12-10 11:32:47.743790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.799 [2024-12-10 11:32:47.743806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.458 ms 00:24:25.799 [2024-12-10 11:32:47.743817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.799 [2024-12-10 11:32:47.743958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.799 [2024-12-10 11:32:47.743980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.799 [2024-12-10 11:32:47.743993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:25.799 [2024-12-10 11:32:47.744004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.799 [2024-12-10 11:32:47.744049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.799 [2024-12-10 11:32:47.744065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:25.799 [2024-12-10 11:32:47.744077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:25.799 [2024-12-10 11:32:47.744087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.799 [2024-12-10 11:32:47.744118] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:25.799 [2024-12-10 11:32:47.748459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.799 [2024-12-10 11:32:47.748523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.799 [2024-12-10 11:32:47.748553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.350 ms 00:24:25.799 [2024-12-10 11:32:47.748565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.799 [2024-12-10 11:32:47.748637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.799 [2024-12-10 11:32:47.748668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:25.799 [2024-12-10 11:32:47.748685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:25.799 [2024-12-10 11:32:47.748696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.799 [2024-12-10 11:32:47.748734] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:25.799 [2024-12-10 11:32:47.748763] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:25.799 [2024-12-10 11:32:47.748806] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:25.799 [2024-12-10 11:32:47.748826] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:25.799 [2024-12-10 11:32:47.748952] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:25.799 [2024-12-10 11:32:47.748966] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:25.799 [2024-12-10 11:32:47.748980] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:25.799 [2024-12-10 11:32:47.748998] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:25.799 [2024-12-10 11:32:47.749011] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:25.799 [2024-12-10 11:32:47.749023] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:25.799 [2024-12-10 11:32:47.749034] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:25.799 [2024-12-10 11:32:47.749044] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:25.799 [2024-12-10 11:32:47.749053] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:25.799 [2024-12-10 11:32:47.749065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.799 [2024-12-10 11:32:47.749075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:25.799 [2024-12-10 11:32:47.749087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:24:25.799 [2024-12-10 11:32:47.749097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.799 [2024-12-10 11:32:47.749219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.799 [2024-12-10 11:32:47.749240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:25.799 [2024-12-10 11:32:47.749252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:25.799 [2024-12-10 11:32:47.749263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.799 [2024-12-10 11:32:47.749373] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:25.799 [2024-12-10 11:32:47.749390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:25.799 [2024-12-10 11:32:47.749402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:25.799 [2024-12-10 11:32:47.749413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.799 [2024-12-10 11:32:47.749424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:25.799 [2024-12-10 11:32:47.749434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:25.799 [2024-12-10 11:32:47.749444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:25.799 [2024-12-10 11:32:47.749455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:25.799 [2024-12-10 11:32:47.749465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:25.799 [2024-12-10 11:32:47.749475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:25.799 [2024-12-10 11:32:47.749485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:25.799 [2024-12-10 11:32:47.749509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:25.799 [2024-12-10 11:32:47.749519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:25.799 [2024-12-10 11:32:47.749529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:25.799 [2024-12-10 11:32:47.749555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:25.799 [2024-12-10 11:32:47.749566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.799 [2024-12-10 11:32:47.749576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:25.799 [2024-12-10 11:32:47.749586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:25.799 [2024-12-10 11:32:47.749596] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.799 [2024-12-10 11:32:47.749606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:25.799 [2024-12-10 11:32:47.749616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:25.799 [2024-12-10 11:32:47.749625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.799 [2024-12-10 11:32:47.749636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:25.799 [2024-12-10 11:32:47.749645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:25.799 [2024-12-10 11:32:47.749671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.799 [2024-12-10 11:32:47.749685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:25.799 [2024-12-10 11:32:47.749695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:25.799 [2024-12-10 11:32:47.749705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.799 [2024-12-10 11:32:47.749715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:25.799 [2024-12-10 11:32:47.749725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:25.799 [2024-12-10 11:32:47.749735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.799 [2024-12-10 11:32:47.749745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:25.799 [2024-12-10 11:32:47.749755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:25.799 [2024-12-10 11:32:47.749764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:25.799 [2024-12-10 11:32:47.749774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:25.799 [2024-12-10 11:32:47.749785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:25.799 [2024-12-10 11:32:47.749795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:25.799 [2024-12-10 11:32:47.749805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:25.799 [2024-12-10 11:32:47.749815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:25.799 [2024-12-10 11:32:47.749824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.799 [2024-12-10 11:32:47.749834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:25.799 [2024-12-10 11:32:47.749844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:25.799 [2024-12-10 11:32:47.749855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.799 [2024-12-10 11:32:47.749866] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:25.799 [2024-12-10 11:32:47.749878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:25.800 [2024-12-10 11:32:47.749894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:25.800 [2024-12-10 11:32:47.749904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.800 [2024-12-10 11:32:47.749916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:25.800 [2024-12-10 11:32:47.749927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:25.800 [2024-12-10 11:32:47.749937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:25.800 
[2024-12-10 11:32:47.749947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:25.800 [2024-12-10 11:32:47.749957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:25.800 [2024-12-10 11:32:47.749967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:25.800 [2024-12-10 11:32:47.749979] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:25.800 [2024-12-10 11:32:47.749992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:25.800 [2024-12-10 11:32:47.750005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:25.800 [2024-12-10 11:32:47.750031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:25.800 [2024-12-10 11:32:47.750042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:25.800 [2024-12-10 11:32:47.750053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:25.800 [2024-12-10 11:32:47.750064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:25.800 [2024-12-10 11:32:47.750074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:25.800 [2024-12-10 11:32:47.750084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:25.800 [2024-12-10 11:32:47.750095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:25.800 [2024-12-10 11:32:47.750105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:25.800 [2024-12-10 11:32:47.750116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:25.800 [2024-12-10 11:32:47.750127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:25.800 [2024-12-10 11:32:47.750137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:25.800 [2024-12-10 11:32:47.750147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:25.800 [2024-12-10 11:32:47.750158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:25.800 [2024-12-10 11:32:47.750168] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:25.800 [2024-12-10 11:32:47.750180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:25.800 [2024-12-10 11:32:47.750193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:25.800 [2024-12-10 11:32:47.750204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:25.800 [2024-12-10 11:32:47.750214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:25.800 [2024-12-10 11:32:47.750225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:25.800 [2024-12-10 11:32:47.750236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.750251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:25.800 [2024-12-10 11:32:47.750262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:24:25.800 [2024-12-10 11:32:47.750273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.800 [2024-12-10 11:32:47.783842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.783908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.800 [2024-12-10 11:32:47.783927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.495 ms 00:24:25.800 [2024-12-10 11:32:47.783939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.800 [2024-12-10 11:32:47.784138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.784159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:25.800 [2024-12-10 11:32:47.784173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:25.800 [2024-12-10 11:32:47.784184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.800 [2024-12-10 11:32:47.838257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.838329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:25.800 [2024-12-10 11:32:47.838352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.039 ms 00:24:25.800 [2024-12-10 11:32:47.838381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.800 [2024-12-10 11:32:47.838565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.838586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:25.800 [2024-12-10 11:32:47.838599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:25.800 [2024-12-10 11:32:47.838610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.800 [2024-12-10 11:32:47.838974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.839000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:25.800 [2024-12-10 11:32:47.839020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:24:25.800 [2024-12-10 11:32:47.839031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.800 [2024-12-10 11:32:47.839193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.839213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:25.800 [2024-12-10 11:32:47.839225] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:24:25.800 [2024-12-10 11:32:47.839235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.800 [2024-12-10 11:32:47.856388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.856443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:25.800 [2024-12-10 11:32:47.856459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.122 ms 00:24:25.800 [2024-12-10 11:32:47.856470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.800 [2024-12-10 11:32:47.873145] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:25.800 [2024-12-10 11:32:47.873201] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:25.800 [2024-12-10 11:32:47.873218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.873230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:25.800 [2024-12-10 11:32:47.873244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.588 ms 00:24:25.800 [2024-12-10 11:32:47.873254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.800 [2024-12-10 11:32:47.903842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.903891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:25.800 [2024-12-10 11:32:47.903909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.484 ms 00:24:25.800 [2024-12-10 11:32:47.903921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.800 [2024-12-10 11:32:47.920319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.920373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:25.800 [2024-12-10 11:32:47.920389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.293 ms 00:24:25.800 [2024-12-10 11:32:47.920400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.800 [2024-12-10 11:32:47.936425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.936477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:25.800 [2024-12-10 11:32:47.936491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.926 ms 00:24:25.800 [2024-12-10 11:32:47.936502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.800 [2024-12-10 11:32:47.937385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.800 [2024-12-10 11:32:47.937418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:25.800 [2024-12-10 11:32:47.937433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:24:25.800 [2024-12-10 11:32:47.937443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.059 [2024-12-10 11:32:48.010889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.059 [2024-12-10 11:32:48.010954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:26.059 [2024-12-10 11:32:48.010973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.411 ms 00:24:26.059 [2024-12-10 11:32:48.010985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.059 [2024-12-10 11:32:48.024103] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:26.059 [2024-12-10 11:32:48.038282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.059 [2024-12-10 11:32:48.038358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:26.059 [2024-12-10 11:32:48.038377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.114 ms 00:24:26.059 [2024-12-10 11:32:48.038398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.059 [2024-12-10 11:32:48.038562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.059 [2024-12-10 11:32:48.038583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:26.059 [2024-12-10 11:32:48.038596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:26.059 [2024-12-10 11:32:48.038608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.059 [2024-12-10 11:32:48.038710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.059 [2024-12-10 11:32:48.038730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:26.059 [2024-12-10 11:32:48.038742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:26.059 [2024-12-10 11:32:48.038759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.059 [2024-12-10 11:32:48.038817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.059 [2024-12-10 11:32:48.038835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:26.059 [2024-12-10 11:32:48.038846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:26.059 [2024-12-10 11:32:48.038857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.059 [2024-12-10 11:32:48.038905] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:26.059 [2024-12-10 11:32:48.038921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.059 [2024-12-10 11:32:48.038932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:26.059 [2024-12-10 11:32:48.038943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:26.059 [2024-12-10 11:32:48.038954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.059 [2024-12-10 11:32:48.069405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.059 [2024-12-10 11:32:48.069480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:26.059 [2024-12-10 11:32:48.069498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.422 ms 00:24:26.059 [2024-12-10 11:32:48.069509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.059 [2024-12-10 11:32:48.069708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.059 [2024-12-10 11:32:48.069730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:26.059 [2024-12-10 11:32:48.069743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:26.059 [2024-12-10 11:32:48.069754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
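
In the SB metadata layout dumps above, blk_offs and blk_sz are counted in FTL blocks, and the MiB figures in the NV cache layout dump are consistent with a 4 KiB block: the region at blk_offs:0x20 blk_sz:0x5a00 lines up with the l2p entry "offset: 0.12 MiB / blocks: 90.00 MiB". A quick cross-check (the 4 KiB block size and the type-to-name mapping are inferred from the dump ordering, not taken from the source):

    FTL_BLOCK_SIZE = 4096  # bytes; inferred from the MiB figures above

    # region type -> name, matched by order against the NV cache layout dump
    REGIONS = {0x0: "sb", 0x2: "l2p", 0x3: "band_md", 0x4: "band_md_mirror"}

    def blocks_to_mib(nblocks):
        return nblocks * FTL_BLOCK_SIZE / (1 << 20)

    for entry in ("type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20",
                  "type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00",
                  "type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80"):
        fields = dict(kv.split(":", 1) for kv in entry.split())
        rtype = int(fields["type"], 16)
        offs = int(fields["blk_offs"], 16)
        size = int(fields["blk_sz"], 16)
        print(f"{REGIONS.get(rtype, hex(rtype)):>14}: "
              f"offset {blocks_to_mib(offs):9.2f} MiB, "
              f"size {blocks_to_mib(size):7.2f} MiB")

The printed offsets (0.00 / 0.12 / 90.12 MiB) and sizes (0.12 / 90.00 / 0.50 MiB) match the sb, l2p and band_md lines of the NV cache layout dump above.
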
00:24:26.059 [2024-12-10 11:32:48.070758] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:26.059 [2024-12-10 11:32:48.075288] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 354.055 ms, result 0 00:24:26.059 [2024-12-10 11:32:48.076225] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:26.059 [2024-12-10 11:32:48.092753] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:26.324  [2024-12-10T11:32:48.491Z] Copying: 4096/4096 [kB] (average 25 MBps)[2024-12-10 11:32:48.254774] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:26.324 [2024-12-10 11:32:48.267875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.324 [2024-12-10 11:32:48.267952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:26.324 [2024-12-10 11:32:48.267977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:26.324 [2024-12-10 11:32:48.267989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.324 [2024-12-10 11:32:48.268024] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:26.324 [2024-12-10 11:32:48.271347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.324 [2024-12-10 11:32:48.271391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:26.324 [2024-12-10 11:32:48.271404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.301 ms 00:24:26.324 [2024-12-10 11:32:48.271414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.324 [2024-12-10 11:32:48.273375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.324 [2024-12-10 11:32:48.273428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:26.324 [2024-12-10 11:32:48.273443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.934 ms 00:24:26.324 [2024-12-10 11:32:48.273453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.324 [2024-12-10 11:32:48.277332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.324 [2024-12-10 11:32:48.277382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:26.324 [2024-12-10 11:32:48.277396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.851 ms 00:24:26.324 [2024-12-10 11:32:48.277407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.324 [2024-12-10 11:32:48.284541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.324 [2024-12-10 11:32:48.284586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:26.324 [2024-12-10 11:32:48.284598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.094 ms 00:24:26.324 [2024-12-10 11:32:48.284608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.324 [2024-12-10 11:32:48.313961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.324 [2024-12-10 11:32:48.314054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:26.324 [2024-12-10 11:32:48.314072] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 29.269 ms 00:24:26.324 [2024-12-10 11:32:48.314083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.324 [2024-12-10 11:32:48.331786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.324 [2024-12-10 11:32:48.331870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:26.324 [2024-12-10 11:32:48.331906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.561 ms 00:24:26.324 [2024-12-10 11:32:48.331918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.324 [2024-12-10 11:32:48.332106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.325 [2024-12-10 11:32:48.332127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:26.325 [2024-12-10 11:32:48.332155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:26.325 [2024-12-10 11:32:48.332167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.325 [2024-12-10 11:32:48.362305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.325 [2024-12-10 11:32:48.362358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:26.325 [2024-12-10 11:32:48.362373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.114 ms 00:24:26.325 [2024-12-10 11:32:48.362384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.325 [2024-12-10 11:32:48.392404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.325 [2024-12-10 11:32:48.392453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:26.325 [2024-12-10 11:32:48.392467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.949 ms 00:24:26.325 [2024-12-10 11:32:48.392477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.325 [2024-12-10 11:32:48.424184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.325 [2024-12-10 11:32:48.424306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:26.325 [2024-12-10 11:32:48.424324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.640 ms 00:24:26.325 [2024-12-10 11:32:48.424335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.325 [2024-12-10 11:32:48.456244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.325 [2024-12-10 11:32:48.456287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:26.325 [2024-12-10 11:32:48.456305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.750 ms 00:24:26.325 [2024-12-10 11:32:48.456316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.325 [2024-12-10 11:32:48.456386] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:26.325 [2024-12-10 11:32:48.456411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:24:26.325 [2024-12-10 11:32:48.456475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.456991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:26.325 [2024-12-10 11:32:48.457237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457337] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:26.326 [2024-12-10 11:32:48.457609] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:26.326 [2024-12-10 11:32:48.457620] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b10030af-2e86-48c7-be0f-b009016a690f 00:24:26.326 [2024-12-10 11:32:48.457645] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:26.326 [2024-12-10 11:32:48.457657] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:24:26.326 [2024-12-10 11:32:48.457667] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:26.326 [2024-12-10 11:32:48.457678] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:26.326 [2024-12-10 11:32:48.457688] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:26.326 [2024-12-10 11:32:48.457699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:26.326 [2024-12-10 11:32:48.457716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:26.326 [2024-12-10 11:32:48.457726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:26.326 [2024-12-10 11:32:48.457735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:26.326 [2024-12-10 11:32:48.457746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.326 [2024-12-10 11:32:48.457757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:26.326 [2024-12-10 11:32:48.457769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.363 ms 00:24:26.326 [2024-12-10 11:32:48.457780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.326 [2024-12-10 11:32:48.474756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.326 [2024-12-10 11:32:48.474810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:26.326 [2024-12-10 11:32:48.474827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.947 ms 00:24:26.326 [2024-12-10 11:32:48.474838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.326 [2024-12-10 11:32:48.475317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.326 [2024-12-10 11:32:48.475345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:26.326 [2024-12-10 11:32:48.475359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:24:26.326 [2024-12-10 11:32:48.475370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.522163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.607 [2024-12-10 11:32:48.522245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.607 [2024-12-10 11:32:48.522262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.607 [2024-12-10 11:32:48.522284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.522438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.607 [2024-12-10 11:32:48.522456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.607 [2024-12-10 11:32:48.522468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.607 [2024-12-10 11:32:48.522478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.522587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.607 [2024-12-10 11:32:48.522605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.607 [2024-12-10 11:32:48.522617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.607 [2024-12-10 11:32:48.522628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.522660] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.607 [2024-12-10 11:32:48.522688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.607 [2024-12-10 11:32:48.522703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.607 [2024-12-10 11:32:48.522714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.623962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.607 [2024-12-10 11:32:48.624022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.607 [2024-12-10 11:32:48.624040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.607 [2024-12-10 11:32:48.624059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.706830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.607 [2024-12-10 11:32:48.706909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.607 [2024-12-10 11:32:48.706925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.607 [2024-12-10 11:32:48.706937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.707023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.607 [2024-12-10 11:32:48.707039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:26.607 [2024-12-10 11:32:48.707050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.607 [2024-12-10 11:32:48.707059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.707092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.607 [2024-12-10 11:32:48.707129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:26.607 [2024-12-10 11:32:48.707140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.607 [2024-12-10 11:32:48.707150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.707311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.607 [2024-12-10 11:32:48.707331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:26.607 [2024-12-10 11:32:48.707344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.607 [2024-12-10 11:32:48.707355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.707406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.607 [2024-12-10 11:32:48.707423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:26.607 [2024-12-10 11:32:48.707442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.607 [2024-12-10 11:32:48.707453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.707500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.607 [2024-12-10 11:32:48.707522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:26.607 [2024-12-10 11:32:48.707534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.607 [2024-12-10 11:32:48.707545] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.707599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.607 [2024-12-10 11:32:48.707621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:26.607 [2024-12-10 11:32:48.707655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.607 [2024-12-10 11:32:48.707667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-10 11:32:48.707839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 440.027 ms, result 0 00:24:27.543 00:24:27.543 00:24:27.543 11:32:49 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78963 00:24:27.543 11:32:49 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:27.543 11:32:49 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78963 00:24:27.543 11:32:49 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78963 ']' 00:24:27.543 11:32:49 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.543 11:32:49 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.543 11:32:49 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.543 11:32:49 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.543 11:32:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:27.802 [2024-12-10 11:32:49.791899] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
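For reference, the RPC sequence this trim test drives can be replayed by hand. The following is a minimal sketch built from the commands visible in the log; the saved JSON config name (ftl.json) and the use of stdin for load_config are assumptions, not taken from the script:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &        # start the target with FTL init tracing, as above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < ftl.json   # ftl.json: hypothetical saved config that recreates ftl0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024        # trim at the head of the device
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 # trim the last 1024 of 23592960 L2P entries
    kill %1    # stop the target; the 'FTL shutdown' management trace runs on exit

The two unmap calls trim 1024 blocks at the head and tail of the 23592960-entry L2P range, which is why two separate 'FTL trim' management processes appear below.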
00:24:27.802 [2024-12-10 11:32:49.792063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78963 ] 00:24:28.059 [2024-12-10 11:32:49.978129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.059 [2024-12-10 11:32:50.084336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.990 11:32:50 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.990 11:32:50 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:28.990 11:32:50 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:29.248 [2024-12-10 11:32:51.218742] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:29.248 [2024-12-10 11:32:51.218820] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:29.248 [2024-12-10 11:32:51.407191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.248 [2024-12-10 11:32:51.407263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:29.248 [2024-12-10 11:32:51.407293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:29.248 [2024-12-10 11:32:51.407307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.248 [2024-12-10 11:32:51.411200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.248 [2024-12-10 11:32:51.411255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:29.248 [2024-12-10 11:32:51.411276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.862 ms 00:24:29.248 [2024-12-10 11:32:51.411288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.248 [2024-12-10 11:32:51.411436] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:29.248 [2024-12-10 11:32:51.412400] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:29.248 [2024-12-10 11:32:51.412446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.248 [2024-12-10 11:32:51.412462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:29.248 [2024-12-10 11:32:51.412477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:24:29.248 [2024-12-10 11:32:51.412489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.248 [2024-12-10 11:32:51.413663] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:29.507 [2024-12-10 11:32:51.430036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.507 [2024-12-10 11:32:51.430101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:29.507 [2024-12-10 11:32:51.430123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.379 ms 00:24:29.507 [2024-12-10 11:32:51.430142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.507 [2024-12-10 11:32:51.430293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.507 [2024-12-10 11:32:51.430323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:29.507 [2024-12-10 11:32:51.430338] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:29.507 [2024-12-10 11:32:51.430352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.507 [2024-12-10 11:32:51.434776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.507 [2024-12-10 11:32:51.434840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:29.507 [2024-12-10 11:32:51.434857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.354 ms 00:24:29.507 [2024-12-10 11:32:51.434871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.507 [2024-12-10 11:32:51.435070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.507 [2024-12-10 11:32:51.435101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:29.507 [2024-12-10 11:32:51.435116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:24:29.508 [2024-12-10 11:32:51.435143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.508 [2024-12-10 11:32:51.435190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.508 [2024-12-10 11:32:51.435216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:29.508 [2024-12-10 11:32:51.435230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:29.508 [2024-12-10 11:32:51.435247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.508 [2024-12-10 11:32:51.435286] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:29.508 [2024-12-10 11:32:51.439689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.508 [2024-12-10 11:32:51.439734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:29.508 [2024-12-10 11:32:51.439754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.409 ms 00:24:29.508 [2024-12-10 11:32:51.439766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.508 [2024-12-10 11:32:51.439846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.508 [2024-12-10 11:32:51.439876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:29.508 [2024-12-10 11:32:51.439894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:29.508 [2024-12-10 11:32:51.439909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.508 [2024-12-10 11:32:51.439942] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:29.508 [2024-12-10 11:32:51.439971] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:29.508 [2024-12-10 11:32:51.440028] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:29.508 [2024-12-10 11:32:51.440054] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:29.508 [2024-12-10 11:32:51.440172] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:29.508 [2024-12-10 11:32:51.440189] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:29.508 [2024-12-10 11:32:51.440210] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:29.508 [2024-12-10 11:32:51.440225] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:29.508 [2024-12-10 11:32:51.440241] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:29.508 [2024-12-10 11:32:51.440254] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:29.508 [2024-12-10 11:32:51.440268] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:29.508 [2024-12-10 11:32:51.440279] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:29.508 [2024-12-10 11:32:51.440295] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:29.508 [2024-12-10 11:32:51.440308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.508 [2024-12-10 11:32:51.440321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:29.508 [2024-12-10 11:32:51.440333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:24:29.508 [2024-12-10 11:32:51.440346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.508 [2024-12-10 11:32:51.440478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.508 [2024-12-10 11:32:51.440507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:29.508 [2024-12-10 11:32:51.440522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:29.508 [2024-12-10 11:32:51.440535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.508 [2024-12-10 11:32:51.440671] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:29.508 [2024-12-10 11:32:51.440694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:29.508 [2024-12-10 11:32:51.440707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:29.508 [2024-12-10 11:32:51.440721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.508 [2024-12-10 11:32:51.440732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:29.508 [2024-12-10 11:32:51.440747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:29.508 [2024-12-10 11:32:51.440759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:29.508 [2024-12-10 11:32:51.440773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:29.508 [2024-12-10 11:32:51.440784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:29.508 [2024-12-10 11:32:51.440797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:29.508 [2024-12-10 11:32:51.440808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:29.508 [2024-12-10 11:32:51.440820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:29.508 [2024-12-10 11:32:51.440831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:29.508 [2024-12-10 11:32:51.440843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:29.508 [2024-12-10 11:32:51.440854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:29.508 [2024-12-10 11:32:51.440866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.508 
[2024-12-10 11:32:51.440877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:29.508 [2024-12-10 11:32:51.440889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:29.508 [2024-12-10 11:32:51.440911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.508 [2024-12-10 11:32:51.440925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:29.508 [2024-12-10 11:32:51.440936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:29.508 [2024-12-10 11:32:51.440949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.508 [2024-12-10 11:32:51.440960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:29.508 [2024-12-10 11:32:51.440985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:29.508 [2024-12-10 11:32:51.440997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.508 [2024-12-10 11:32:51.441014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:29.508 [2024-12-10 11:32:51.441027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:29.508 [2024-12-10 11:32:51.441043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.508 [2024-12-10 11:32:51.441055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:29.508 [2024-12-10 11:32:51.441073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:29.508 [2024-12-10 11:32:51.441084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.508 [2024-12-10 11:32:51.441100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:29.508 [2024-12-10 11:32:51.441112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:29.508 [2024-12-10 11:32:51.441128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:29.508 [2024-12-10 11:32:51.441139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:29.508 [2024-12-10 11:32:51.441156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:29.508 [2024-12-10 11:32:51.441167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:29.508 [2024-12-10 11:32:51.441182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:29.508 [2024-12-10 11:32:51.441194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:29.508 [2024-12-10 11:32:51.441215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.508 [2024-12-10 11:32:51.441227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:29.508 [2024-12-10 11:32:51.441242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:29.508 [2024-12-10 11:32:51.441254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.508 [2024-12-10 11:32:51.441270] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:29.508 [2024-12-10 11:32:51.441288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:29.508 [2024-12-10 11:32:51.441305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:29.508 [2024-12-10 11:32:51.441317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.508 [2024-12-10 11:32:51.441334] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:29.508 [2024-12-10 11:32:51.441346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:29.508 [2024-12-10 11:32:51.441361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:29.508 [2024-12-10 11:32:51.441373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:29.508 [2024-12-10 11:32:51.441388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:29.508 [2024-12-10 11:32:51.441400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:29.508 [2024-12-10 11:32:51.441418] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:29.508 [2024-12-10 11:32:51.441433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:29.508 [2024-12-10 11:32:51.441458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:29.508 [2024-12-10 11:32:51.441471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:29.508 [2024-12-10 11:32:51.441490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:29.508 [2024-12-10 11:32:51.441503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:29.508 [2024-12-10 11:32:51.441519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:29.508 [2024-12-10 11:32:51.441531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:29.508 [2024-12-10 11:32:51.441547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:29.508 [2024-12-10 11:32:51.441560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:29.508 [2024-12-10 11:32:51.441576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:29.508 [2024-12-10 11:32:51.441588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:29.509 [2024-12-10 11:32:51.441605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:29.509 [2024-12-10 11:32:51.441617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:29.509 [2024-12-10 11:32:51.441649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:29.509 [2024-12-10 11:32:51.441665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:29.509 [2024-12-10 11:32:51.441682] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:29.509 [2024-12-10 
11:32:51.441697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:29.509 [2024-12-10 11:32:51.441718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:29.509 [2024-12-10 11:32:51.441731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:29.509 [2024-12-10 11:32:51.441747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:29.509 [2024-12-10 11:32:51.441760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:29.509 [2024-12-10 11:32:51.441778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.441791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:29.509 [2024-12-10 11:32:51.441808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.193 ms 00:24:29.509 [2024-12-10 11:32:51.441826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.509 [2024-12-10 11:32:51.475040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.475103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:29.509 [2024-12-10 11:32:51.475132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.121 ms 00:24:29.509 [2024-12-10 11:32:51.475152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.509 [2024-12-10 11:32:51.475359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.475381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:29.509 [2024-12-10 11:32:51.475397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:29.509 [2024-12-10 11:32:51.475408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.509 [2024-12-10 11:32:51.516625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.516702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:29.509 [2024-12-10 11:32:51.516731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.175 ms 00:24:29.509 [2024-12-10 11:32:51.516747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.509 [2024-12-10 11:32:51.516901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.516921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:29.509 [2024-12-10 11:32:51.516941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:29.509 [2024-12-10 11:32:51.516954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.509 [2024-12-10 11:32:51.517302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.517342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:29.509 [2024-12-10 11:32:51.517365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:24:29.509 [2024-12-10 11:32:51.517378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:29.509 [2024-12-10 11:32:51.517544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.517570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:29.509 [2024-12-10 11:32:51.517590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:24:29.509 [2024-12-10 11:32:51.517603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.509 [2024-12-10 11:32:51.535953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.536013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:29.509 [2024-12-10 11:32:51.536036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.298 ms 00:24:29.509 [2024-12-10 11:32:51.536048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.509 [2024-12-10 11:32:51.564799] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:29.509 [2024-12-10 11:32:51.564861] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:29.509 [2024-12-10 11:32:51.564886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.564900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:29.509 [2024-12-10 11:32:51.564918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.657 ms 00:24:29.509 [2024-12-10 11:32:51.564942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.509 [2024-12-10 11:32:51.594957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.595026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:29.509 [2024-12-10 11:32:51.595049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.880 ms 00:24:29.509 [2024-12-10 11:32:51.595062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.509 [2024-12-10 11:32:51.611344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.611398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:29.509 [2024-12-10 11:32:51.611422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.126 ms 00:24:29.509 [2024-12-10 11:32:51.611435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.509 [2024-12-10 11:32:51.627716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.627763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:29.509 [2024-12-10 11:32:51.627783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.169 ms 00:24:29.509 [2024-12-10 11:32:51.627795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.509 [2024-12-10 11:32:51.628679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.509 [2024-12-10 11:32:51.628712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:29.509 [2024-12-10 11:32:51.628730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:24:29.509 [2024-12-10 11:32:51.628743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.768 [2024-12-10 
11:32:51.703153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.768 [2024-12-10 11:32:51.703245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:29.768 [2024-12-10 11:32:51.703270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.372 ms 00:24:29.768 [2024-12-10 11:32:51.703282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.768 [2024-12-10 11:32:51.716309] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:29.768 [2024-12-10 11:32:51.730578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.768 [2024-12-10 11:32:51.730671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:29.768 [2024-12-10 11:32:51.730698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.122 ms 00:24:29.768 [2024-12-10 11:32:51.730712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.768 [2024-12-10 11:32:51.730881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.768 [2024-12-10 11:32:51.730905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:29.768 [2024-12-10 11:32:51.730919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:29.768 [2024-12-10 11:32:51.730933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.768 [2024-12-10 11:32:51.731000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.768 [2024-12-10 11:32:51.731021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:29.768 [2024-12-10 11:32:51.731034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:29.768 [2024-12-10 11:32:51.731050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.768 [2024-12-10 11:32:51.731083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.768 [2024-12-10 11:32:51.731104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:29.768 [2024-12-10 11:32:51.731117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:29.768 [2024-12-10 11:32:51.731130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.768 [2024-12-10 11:32:51.731172] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:29.768 [2024-12-10 11:32:51.731195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.768 [2024-12-10 11:32:51.731210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:29.768 [2024-12-10 11:32:51.731224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:29.768 [2024-12-10 11:32:51.731235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.768 [2024-12-10 11:32:51.762807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.768 [2024-12-10 11:32:51.762881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:29.768 [2024-12-10 11:32:51.762907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.530 ms 00:24:29.768 [2024-12-10 11:32:51.762919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.768 [2024-12-10 11:32:51.763066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.768 [2024-12-10 11:32:51.763088] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:29.768 [2024-12-10 11:32:51.763104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:29.768 [2024-12-10 11:32:51.763119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.768 [2024-12-10 11:32:51.764051] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:29.768 [2024-12-10 11:32:51.768239] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 356.501 ms, result 0 00:24:29.768 [2024-12-10 11:32:51.769413] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:29.768 Some configs were skipped because the RPC state that can call them passed over. 00:24:29.768 11:32:51 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:30.026 [2024-12-10 11:32:52.051514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.026 [2024-12-10 11:32:52.051591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:30.026 [2024-12-10 11:32:52.051613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.475 ms 00:24:30.026 [2024-12-10 11:32:52.051643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.026 [2024-12-10 11:32:52.051723] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.686 ms, result 0 00:24:30.026 true 00:24:30.026 11:32:52 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:30.284 [2024-12-10 11:32:52.331489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.284 [2024-12-10 11:32:52.331550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:30.284 [2024-12-10 11:32:52.331574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:24:30.284 [2024-12-10 11:32:52.331586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.284 [2024-12-10 11:32:52.331659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.200 ms, result 0 00:24:30.284 true 00:24:30.284 11:32:52 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78963 00:24:30.284 11:32:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78963 ']' 00:24:30.284 11:32:52 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78963 00:24:30.284 11:32:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:24:30.284 11:32:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.284 11:32:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78963 00:24:30.284 11:32:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:30.284 11:32:52 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:30.284 killing process with pid 78963 00:24:30.284 11:32:52 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78963' 00:24:30.284 11:32:52 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78963 00:24:30.284 11:32:52 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78963 00:24:31.219 [2024-12-10 11:32:53.342086] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.219 [2024-12-10 11:32:53.342166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:31.219 [2024-12-10 11:32:53.342188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:31.219 [2024-12-10 11:32:53.342203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.219 [2024-12-10 11:32:53.342239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:31.219 [2024-12-10 11:32:53.345618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.219 [2024-12-10 11:32:53.345665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:31.219 [2024-12-10 11:32:53.345686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.350 ms 00:24:31.219 [2024-12-10 11:32:53.345698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.219 [2024-12-10 11:32:53.346019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.219 [2024-12-10 11:32:53.346052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:31.219 [2024-12-10 11:32:53.346070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:24:31.219 [2024-12-10 11:32:53.346083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.219 [2024-12-10 11:32:53.350162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.219 [2024-12-10 11:32:53.350206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:31.219 [2024-12-10 11:32:53.350229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.047 ms 00:24:31.219 [2024-12-10 11:32:53.350242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.219 [2024-12-10 11:32:53.357941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.219 [2024-12-10 11:32:53.357979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:31.219 [2024-12-10 11:32:53.357998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.646 ms 00:24:31.219 [2024-12-10 11:32:53.358011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.219 [2024-12-10 11:32:53.371476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.219 [2024-12-10 11:32:53.371573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:31.219 [2024-12-10 11:32:53.371601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.379 ms 00:24:31.219 [2024-12-10 11:32:53.371614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.219 [2024-12-10 11:32:53.380761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.219 [2024-12-10 11:32:53.380853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:31.219 [2024-12-10 11:32:53.380906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.018 ms 00:24:31.219 [2024-12-10 11:32:53.380919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.219 [2024-12-10 11:32:53.381091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.219 [2024-12-10 11:32:53.381112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:31.219 [2024-12-10 11:32:53.381145] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:31.219 [2024-12-10 11:32:53.381156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-10 11:32:53.394953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-10 11:32:53.395045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:31.478 [2024-12-10 11:32:53.395083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.752 ms 00:24:31.478 [2024-12-10 11:32:53.395094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-10 11:32:53.408802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-10 11:32:53.408893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:31.478 [2024-12-10 11:32:53.408949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.570 ms 00:24:31.478 [2024-12-10 11:32:53.408961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-10 11:32:53.422045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-10 11:32:53.422160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:31.478 [2024-12-10 11:32:53.422200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.986 ms 00:24:31.478 [2024-12-10 11:32:53.422212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-10 11:32:53.435437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-10 11:32:53.435561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:31.478 [2024-12-10 11:32:53.435585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.099 ms 00:24:31.478 [2024-12-10 11:32:53.435597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-10 11:32:53.435676] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:31.478 [2024-12-10 11:32:53.435707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 
11:32:53.435854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:31.478 [2024-12-10 11:32:53.435992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:24:31.479 [2024-12-10 11:32:53.436231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.436996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.437009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.437024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.437037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.437051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.437064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.437079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.437092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.437108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.437121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.437136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:31.479 [2024-12-10 11:32:53.437184] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:31.479 [2024-12-10 11:32:53.437207] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b10030af-2e86-48c7-be0f-b009016a690f 00:24:31.479 [2024-12-10 11:32:53.437222] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:31.479 [2024-12-10 11:32:53.437236] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:31.479 [2024-12-10 11:32:53.437247] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:31.479 [2024-12-10 11:32:53.437261] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:31.479 [2024-12-10 11:32:53.437273] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:31.479 [2024-12-10 11:32:53.437287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:31.479 [2024-12-10 11:32:53.437298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:31.479 [2024-12-10 11:32:53.437310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:31.479 [2024-12-10 11:32:53.437321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:31.479 [2024-12-10 11:32:53.437336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:31.480 [2024-12-10 11:32:53.437348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:31.480 [2024-12-10 11:32:53.437363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:24:31.480 [2024-12-10 11:32:53.437374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.480 [2024-12-10 11:32:53.454518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.480 [2024-12-10 11:32:53.454618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:31.480 [2024-12-10 11:32:53.454678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.068 ms 00:24:31.480 [2024-12-10 11:32:53.454693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.480 [2024-12-10 11:32:53.455250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.480 [2024-12-10 11:32:53.455285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:31.480 [2024-12-10 11:32:53.455308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:24:31.480 [2024-12-10 11:32:53.455319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.480 [2024-12-10 11:32:53.517206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.480 [2024-12-10 11:32:53.517269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:31.480 [2024-12-10 11:32:53.517292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.480 [2024-12-10 11:32:53.517304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.480 [2024-12-10 11:32:53.517454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.480 [2024-12-10 11:32:53.517475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:31.480 [2024-12-10 11:32:53.517494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.480 [2024-12-10 11:32:53.517505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.480 [2024-12-10 11:32:53.517610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.480 [2024-12-10 11:32:53.517629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:31.480 [2024-12-10 11:32:53.517646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.480 [2024-12-10 11:32:53.517676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.480 [2024-12-10 11:32:53.517714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.480 [2024-12-10 11:32:53.517729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:31.480 [2024-12-10 11:32:53.517743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.480 [2024-12-10 11:32:53.517758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.480 [2024-12-10 11:32:53.625655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.480 [2024-12-10 11:32:53.625728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:31.480 [2024-12-10 11:32:53.625752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.480 [2024-12-10 11:32:53.625764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.738 [2024-12-10 
11:32:53.713813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.738 [2024-12-10 11:32:53.713919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:31.738 [2024-12-10 11:32:53.713943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.738 [2024-12-10 11:32:53.713959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.738 [2024-12-10 11:32:53.714074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.738 [2024-12-10 11:32:53.714094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:31.738 [2024-12-10 11:32:53.714112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.738 [2024-12-10 11:32:53.714123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.738 [2024-12-10 11:32:53.714164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.738 [2024-12-10 11:32:53.714179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:31.738 [2024-12-10 11:32:53.714193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.738 [2024-12-10 11:32:53.714204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.738 [2024-12-10 11:32:53.714349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.738 [2024-12-10 11:32:53.714380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:31.738 [2024-12-10 11:32:53.714397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.738 [2024-12-10 11:32:53.714409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.738 [2024-12-10 11:32:53.714470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.739 [2024-12-10 11:32:53.714489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:31.739 [2024-12-10 11:32:53.714503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.739 [2024-12-10 11:32:53.714514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.739 [2024-12-10 11:32:53.714568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.739 [2024-12-10 11:32:53.714584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:31.739 [2024-12-10 11:32:53.714601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.739 [2024-12-10 11:32:53.714613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.739 [2024-12-10 11:32:53.714689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.739 [2024-12-10 11:32:53.714711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:31.739 [2024-12-10 11:32:53.714726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.739 [2024-12-10 11:32:53.714738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.739 [2024-12-10 11:32:53.714906] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.787 ms, result 0 00:24:32.674 11:32:54 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:32.674 [2024-12-10 11:32:54.765088] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:24:32.674 [2024-12-10 11:32:54.765294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79028 ] 00:24:32.934 [2024-12-10 11:32:54.951802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.934 [2024-12-10 11:32:55.054361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.500 [2024-12-10 11:32:55.388314] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:33.500 [2024-12-10 11:32:55.388402] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:33.500 [2024-12-10 11:32:55.551758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.500 [2024-12-10 11:32:55.551823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:33.500 [2024-12-10 11:32:55.551860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:33.500 [2024-12-10 11:32:55.551897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.500 [2024-12-10 11:32:55.555491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.500 [2024-12-10 11:32:55.555551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:33.500 [2024-12-10 11:32:55.555584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.562 ms 00:24:33.501 [2024-12-10 11:32:55.555595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.501 [2024-12-10 11:32:55.555905] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:33.501 [2024-12-10 11:32:55.556974] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:33.501 [2024-12-10 11:32:55.557017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.501 [2024-12-10 11:32:55.557046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:33.501 [2024-12-10 11:32:55.557074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:24:33.501 [2024-12-10 11:32:55.557084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.501 [2024-12-10 11:32:55.558582] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:33.501 [2024-12-10 11:32:55.575808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.501 [2024-12-10 11:32:55.575879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:33.501 [2024-12-10 11:32:55.575916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.242 ms 00:24:33.501 [2024-12-10 11:32:55.575928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.501 [2024-12-10 11:32:55.576066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.501 [2024-12-10 11:32:55.576089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:33.501 [2024-12-10 11:32:55.576104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:33.501 [2024-12-10 
11:32:55.576115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.501 [2024-12-10 11:32:55.580922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.501 [2024-12-10 11:32:55.580972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:33.501 [2024-12-10 11:32:55.581019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.746 ms 00:24:33.501 [2024-12-10 11:32:55.581030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.501 [2024-12-10 11:32:55.581209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.501 [2024-12-10 11:32:55.581237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:33.501 [2024-12-10 11:32:55.581252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:24:33.501 [2024-12-10 11:32:55.581263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.501 [2024-12-10 11:32:55.581307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.501 [2024-12-10 11:32:55.581323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:33.501 [2024-12-10 11:32:55.581336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:33.501 [2024-12-10 11:32:55.581347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.501 [2024-12-10 11:32:55.581379] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:33.501 [2024-12-10 11:32:55.585778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.501 [2024-12-10 11:32:55.585814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:33.501 [2024-12-10 11:32:55.585845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.408 ms 00:24:33.501 [2024-12-10 11:32:55.585855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.501 [2024-12-10 11:32:55.585930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.501 [2024-12-10 11:32:55.585965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:33.501 [2024-12-10 11:32:55.585994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:33.501 [2024-12-10 11:32:55.586005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.501 [2024-12-10 11:32:55.586066] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:33.501 [2024-12-10 11:32:55.586098] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:33.501 [2024-12-10 11:32:55.586142] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:33.501 [2024-12-10 11:32:55.586169] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:33.501 [2024-12-10 11:32:55.586294] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:33.501 [2024-12-10 11:32:55.586321] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:33.501 [2024-12-10 11:32:55.586338] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:24:33.501 [2024-12-10 11:32:55.586360] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:33.501 [2024-12-10 11:32:55.586374] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:33.501 [2024-12-10 11:32:55.586387] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:33.501 [2024-12-10 11:32:55.586398] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:33.501 [2024-12-10 11:32:55.586409] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:33.501 [2024-12-10 11:32:55.586420] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:33.501 [2024-12-10 11:32:55.586432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.501 [2024-12-10 11:32:55.586454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:33.501 [2024-12-10 11:32:55.586467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:24:33.501 [2024-12-10 11:32:55.586478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.501 [2024-12-10 11:32:55.586581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.501 [2024-12-10 11:32:55.586613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:33.501 [2024-12-10 11:32:55.586642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:33.501 [2024-12-10 11:32:55.586656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.501 [2024-12-10 11:32:55.586773] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:33.501 [2024-12-10 11:32:55.586792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:33.501 [2024-12-10 11:32:55.586804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.501 [2024-12-10 11:32:55.586816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.501 [2024-12-10 11:32:55.586828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:33.501 [2024-12-10 11:32:55.586838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:33.501 [2024-12-10 11:32:55.586849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:33.501 [2024-12-10 11:32:55.586860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:33.501 [2024-12-10 11:32:55.586870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:33.501 [2024-12-10 11:32:55.586881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.501 [2024-12-10 11:32:55.586891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:33.501 [2024-12-10 11:32:55.586916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:33.501 [2024-12-10 11:32:55.586927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.501 [2024-12-10 11:32:55.586938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:33.501 [2024-12-10 11:32:55.586949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:33.501 [2024-12-10 11:32:55.586959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.501 [2024-12-10 11:32:55.586969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:24:33.501 [2024-12-10 11:32:55.586980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:33.501 [2024-12-10 11:32:55.586991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.501 [2024-12-10 11:32:55.587001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:33.501 [2024-12-10 11:32:55.587015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:33.501 [2024-12-10 11:32:55.587026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.501 [2024-12-10 11:32:55.587038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:33.501 [2024-12-10 11:32:55.587049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:33.501 [2024-12-10 11:32:55.587059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.501 [2024-12-10 11:32:55.587069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:33.501 [2024-12-10 11:32:55.587079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:33.501 [2024-12-10 11:32:55.587090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.501 [2024-12-10 11:32:55.587100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:33.501 [2024-12-10 11:32:55.587110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:33.501 [2024-12-10 11:32:55.587120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.501 [2024-12-10 11:32:55.587131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:33.501 [2024-12-10 11:32:55.587141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:33.501 [2024-12-10 11:32:55.587152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.501 [2024-12-10 11:32:55.587162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:33.501 [2024-12-10 11:32:55.587172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:33.501 [2024-12-10 11:32:55.587182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.501 [2024-12-10 11:32:55.587193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:33.501 [2024-12-10 11:32:55.587203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:33.501 [2024-12-10 11:32:55.587214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.501 [2024-12-10 11:32:55.587224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:33.501 [2024-12-10 11:32:55.587235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:33.501 [2024-12-10 11:32:55.587245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.502 [2024-12-10 11:32:55.587256] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:33.502 [2024-12-10 11:32:55.587267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:33.502 [2024-12-10 11:32:55.587284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.502 [2024-12-10 11:32:55.587295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.502 [2024-12-10 11:32:55.587306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:33.502 [2024-12-10 11:32:55.587317] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:33.502 [2024-12-10 11:32:55.587327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:33.502 [2024-12-10 11:32:55.587338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:33.502 [2024-12-10 11:32:55.587354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:33.502 [2024-12-10 11:32:55.587365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:33.502 [2024-12-10 11:32:55.587377] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:33.502 [2024-12-10 11:32:55.587392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.502 [2024-12-10 11:32:55.587405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:33.502 [2024-12-10 11:32:55.587416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:33.502 [2024-12-10 11:32:55.587428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:33.502 [2024-12-10 11:32:55.587439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:33.502 [2024-12-10 11:32:55.587451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:33.502 [2024-12-10 11:32:55.587462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:33.502 [2024-12-10 11:32:55.587474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:33.502 [2024-12-10 11:32:55.587485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:33.502 [2024-12-10 11:32:55.587496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:33.502 [2024-12-10 11:32:55.587508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:33.502 [2024-12-10 11:32:55.587520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:33.502 [2024-12-10 11:32:55.587532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:33.502 [2024-12-10 11:32:55.587543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:33.502 [2024-12-10 11:32:55.587555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:33.502 [2024-12-10 11:32:55.587567] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:33.502 [2024-12-10 11:32:55.587579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.502 [2024-12-10 11:32:55.587592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:33.502 [2024-12-10 11:32:55.587603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:33.502 [2024-12-10 11:32:55.587614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:33.502 [2024-12-10 11:32:55.587639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:33.502 [2024-12-10 11:32:55.587656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.502 [2024-12-10 11:32:55.587673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:33.502 [2024-12-10 11:32:55.587685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:24:33.502 [2024-12-10 11:32:55.587696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.502 [2024-12-10 11:32:55.622466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.502 [2024-12-10 11:32:55.622576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:33.502 [2024-12-10 11:32:55.622610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.684 ms 00:24:33.502 [2024-12-10 11:32:55.622623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.502 [2024-12-10 11:32:55.622875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.502 [2024-12-10 11:32:55.622906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:33.502 [2024-12-10 11:32:55.622922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:24:33.502 [2024-12-10 11:32:55.622934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.672719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.761 [2024-12-10 11:32:55.672786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:33.761 [2024-12-10 11:32:55.672828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.750 ms 00:24:33.761 [2024-12-10 11:32:55.672840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.673060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.761 [2024-12-10 11:32:55.673081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:33.761 [2024-12-10 11:32:55.673094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:33.761 [2024-12-10 11:32:55.673106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.673450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.761 [2024-12-10 11:32:55.673481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:33.761 [2024-12-10 11:32:55.673502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:24:33.761 [2024-12-10 11:32:55.673513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.673692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:33.761 [2024-12-10 11:32:55.673720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:33.761 [2024-12-10 11:32:55.673735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:24:33.761 [2024-12-10 11:32:55.673747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.690590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.761 [2024-12-10 11:32:55.690681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:33.761 [2024-12-10 11:32:55.690733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.809 ms 00:24:33.761 [2024-12-10 11:32:55.690745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.707463] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:33.761 [2024-12-10 11:32:55.707553] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:33.761 [2024-12-10 11:32:55.707590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.761 [2024-12-10 11:32:55.707602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:33.761 [2024-12-10 11:32:55.707617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.598 ms 00:24:33.761 [2024-12-10 11:32:55.707626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.738551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.761 [2024-12-10 11:32:55.738619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:33.761 [2024-12-10 11:32:55.738681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.693 ms 00:24:33.761 [2024-12-10 11:32:55.738694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.754882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.761 [2024-12-10 11:32:55.754937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:33.761 [2024-12-10 11:32:55.754971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.984 ms 00:24:33.761 [2024-12-10 11:32:55.754998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.770617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.761 [2024-12-10 11:32:55.770681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:33.761 [2024-12-10 11:32:55.770713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.473 ms 00:24:33.761 [2024-12-10 11:32:55.770724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.771603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.761 [2024-12-10 11:32:55.771679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:33.761 [2024-12-10 11:32:55.771694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:24:33.761 [2024-12-10 11:32:55.771705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.841281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.761 [2024-12-10 
11:32:55.841348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:33.761 [2024-12-10 11:32:55.841384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.542 ms 00:24:33.761 [2024-12-10 11:32:55.841396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.852960] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:33.761 [2024-12-10 11:32:55.867178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.761 [2024-12-10 11:32:55.867245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:33.761 [2024-12-10 11:32:55.867265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.574 ms 00:24:33.761 [2024-12-10 11:32:55.867285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.867458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.761 [2024-12-10 11:32:55.867480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:33.761 [2024-12-10 11:32:55.867494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:33.761 [2024-12-10 11:32:55.867519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.761 [2024-12-10 11:32:55.867644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.762 [2024-12-10 11:32:55.867687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:33.762 [2024-12-10 11:32:55.867703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:33.762 [2024-12-10 11:32:55.867720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.762 [2024-12-10 11:32:55.867769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.762 [2024-12-10 11:32:55.867789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:33.762 [2024-12-10 11:32:55.867802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:33.762 [2024-12-10 11:32:55.867813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.762 [2024-12-10 11:32:55.867859] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:33.762 [2024-12-10 11:32:55.867912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.762 [2024-12-10 11:32:55.867926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:33.762 [2024-12-10 11:32:55.867940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:33.762 [2024-12-10 11:32:55.867951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.762 [2024-12-10 11:32:55.898693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.762 [2024-12-10 11:32:55.898765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:33.762 [2024-12-10 11:32:55.898803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.706 ms 00:24:33.762 [2024-12-10 11:32:55.898814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.762 [2024-12-10 11:32:55.898988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.762 [2024-12-10 11:32:55.899010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:33.762 [2024-12-10 
11:32:55.899024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:33.762 [2024-12-10 11:32:55.899034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.762 [2024-12-10 11:32:55.900129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:33.762 [2024-12-10 11:32:55.904363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 347.970 ms, result 0 00:24:33.762 [2024-12-10 11:32:55.905257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:33.762 [2024-12-10 11:32:55.921182] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:35.137  [2024-12-10T11:32:58.240Z] Copying: 25/256 [MB] (25 MBps) [2024-12-10T11:32:59.174Z] Copying: 48/256 [MB] (22 MBps) [2024-12-10T11:33:00.107Z] Copying: 72/256 [MB] (24 MBps) [2024-12-10T11:33:01.040Z] Copying: 99/256 [MB] (26 MBps) [2024-12-10T11:33:01.974Z] Copying: 123/256 [MB] (24 MBps) [2024-12-10T11:33:03.347Z] Copying: 148/256 [MB] (25 MBps) [2024-12-10T11:33:04.278Z] Copying: 173/256 [MB] (24 MBps) [2024-12-10T11:33:05.214Z] Copying: 200/256 [MB] (27 MBps) [2024-12-10T11:33:06.147Z] Copying: 226/256 [MB] (26 MBps) [2024-12-10T11:33:06.405Z] Copying: 251/256 [MB] (24 MBps) [2024-12-10T11:33:06.664Z] Copying: 256/256 [MB] (average 25 MBps)[2024-12-10 11:33:06.407509] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:44.497 [2024-12-10 11:33:06.426313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.497 [2024-12-10 11:33:06.426408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:44.497 [2024-12-10 11:33:06.426460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:44.497 [2024-12-10 11:33:06.426486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.497 [2024-12-10 11:33:06.426548] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:44.497 [2024-12-10 11:33:06.430666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.497 [2024-12-10 11:33:06.430722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:44.497 [2024-12-10 11:33:06.430766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.072 ms 00:24:44.497 [2024-12-10 11:33:06.430791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.497 [2024-12-10 11:33:06.431236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.497 [2024-12-10 11:33:06.431291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:44.497 [2024-12-10 11:33:06.431325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:24:44.497 [2024-12-10 11:33:06.431350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.497 [2024-12-10 11:33:06.436073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.497 [2024-12-10 11:33:06.436130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:44.497 [2024-12-10 11:33:06.436165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.658 ms 00:24:44.497 [2024-12-10 11:33:06.436190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:44.497 [2024-12-10 11:33:06.445661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.497 [2024-12-10 11:33:06.445721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:44.497 [2024-12-10 11:33:06.445756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.386 ms 00:24:44.497 [2024-12-10 11:33:06.445781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.497 [2024-12-10 11:33:06.484099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.497 [2024-12-10 11:33:06.484182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:44.497 [2024-12-10 11:33:06.484221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.213 ms 00:24:44.497 [2024-12-10 11:33:06.484245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.497 [2024-12-10 11:33:06.505640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.497 [2024-12-10 11:33:06.505717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:44.497 [2024-12-10 11:33:06.505769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.177 ms 00:24:44.497 [2024-12-10 11:33:06.505794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.497 [2024-12-10 11:33:06.506155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.497 [2024-12-10 11:33:06.506218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:44.497 [2024-12-10 11:33:06.506282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:24:44.497 [2024-12-10 11:33:06.506309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.497 [2024-12-10 11:33:06.544671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.497 [2024-12-10 11:33:06.544756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:44.497 [2024-12-10 11:33:06.544795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.305 ms 00:24:44.497 [2024-12-10 11:33:06.544819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.497 [2024-12-10 11:33:06.582938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.497 [2024-12-10 11:33:06.583023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:44.497 [2024-12-10 11:33:06.583067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.985 ms 00:24:44.497 [2024-12-10 11:33:06.583090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.497 [2024-12-10 11:33:06.620920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.497 [2024-12-10 11:33:06.621009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:44.497 [2024-12-10 11:33:06.621045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.693 ms 00:24:44.497 [2024-12-10 11:33:06.621069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.497 [2024-12-10 11:33:06.659178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.497 [2024-12-10 11:33:06.659262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:44.497 [2024-12-10 11:33:06.659300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.747 ms 00:24:44.497 
[2024-12-10 11:33:06.659324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.497 [2024-12-10 11:33:06.659529] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:44.497 [2024-12-10 11:33:06.659581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:44.497 [2024-12-10 11:33:06.659667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:44.497 [2024-12-10 11:33:06.659696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:44.497 [2024-12-10 11:33:06.659716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:44.497 [2024-12-10 11:33:06.659736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:44.497 [2024-12-10 11:33:06.659763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:44.497 [2024-12-10 11:33:06.659788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.659815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.659842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.659868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.659920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.659948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.659973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660271] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.660974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 
11:33:06.661081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:24:44.498 [2024-12-10 11:33:06.661776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.661982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:24:44.498 [2024-12-10 11:33:06.662548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:44.499 [2024-12-10 11:33:06.662577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:44.499 [2024-12-10 11:33:06.662615] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:44.499 [2024-12-10 11:33:06.662670] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b10030af-2e86-48c7-be0f-b009016a690f 00:24:44.499 [2024-12-10 11:33:06.662699] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:44.499 [2024-12-10 11:33:06.662722] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:44.499 [2024-12-10 11:33:06.662746] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:44.499 [2024-12-10 11:33:06.662771] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:44.756 [2024-12-10 11:33:06.662794] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:44.756 [2024-12-10 11:33:06.662819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:44.756 [2024-12-10 11:33:06.662857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:44.756 [2024-12-10 11:33:06.662881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:44.756 [2024-12-10 11:33:06.662905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:44.756 [2024-12-10 11:33:06.662931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.756 [2024-12-10 11:33:06.662958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:44.757 [2024-12-10 11:33:06.662983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.404 ms 00:24:44.757 [2024-12-10 11:33:06.663008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.757 [2024-12-10 11:33:06.683678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.757 [2024-12-10 11:33:06.683749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:44.757 [2024-12-10 11:33:06.683785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.611 ms 00:24:44.757 [2024-12-10 11:33:06.683811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.757 [2024-12-10 11:33:06.684483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.757 [2024-12-10 11:33:06.684540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:44.757 [2024-12-10 11:33:06.684575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:24:44.757 [2024-12-10 11:33:06.684600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.757 [2024-12-10 11:33:06.740626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.757 [2024-12-10 11:33:06.740723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:44.757 [2024-12-10 11:33:06.740758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.757 [2024-12-10 11:33:06.740794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.757 [2024-12-10 11:33:06.741024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.757 [2024-12-10 11:33:06.741065] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:44.757 [2024-12-10 11:33:06.741095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.757 [2024-12-10 11:33:06.741121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.757 [2024-12-10 11:33:06.741254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.757 [2024-12-10 11:33:06.741307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:44.757 [2024-12-10 11:33:06.741338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.757 [2024-12-10 11:33:06.741362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.757 [2024-12-10 11:33:06.741428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.757 [2024-12-10 11:33:06.741459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:44.757 [2024-12-10 11:33:06.741487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.757 [2024-12-10 11:33:06.741511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.757 [2024-12-10 11:33:06.867223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.757 [2024-12-10 11:33:06.867311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:44.757 [2024-12-10 11:33:06.867347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.757 [2024-12-10 11:33:06.867370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.016 [2024-12-10 11:33:06.961578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.016 [2024-12-10 11:33:06.961676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:45.016 [2024-12-10 11:33:06.961707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.016 [2024-12-10 11:33:06.961727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.016 [2024-12-10 11:33:06.961850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.016 [2024-12-10 11:33:06.961882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:45.016 [2024-12-10 11:33:06.961906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.016 [2024-12-10 11:33:06.961929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.016 [2024-12-10 11:33:06.961990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.016 [2024-12-10 11:33:06.962045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:45.016 [2024-12-10 11:33:06.962071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.016 [2024-12-10 11:33:06.962094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.016 [2024-12-10 11:33:06.962275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.016 [2024-12-10 11:33:06.962314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:45.016 [2024-12-10 11:33:06.962339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.016 [2024-12-10 11:33:06.962360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.016 [2024-12-10 11:33:06.962453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:45.016 [2024-12-10 11:33:06.962484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:45.016 [2024-12-10 11:33:06.962521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.016 [2024-12-10 11:33:06.962543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.016 [2024-12-10 11:33:06.962648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.016 [2024-12-10 11:33:06.962682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:45.016 [2024-12-10 11:33:06.962706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.016 [2024-12-10 11:33:06.962726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.016 [2024-12-10 11:33:06.962815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.016 [2024-12-10 11:33:06.962858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:45.016 [2024-12-10 11:33:06.962884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.016 [2024-12-10 11:33:06.962905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.016 [2024-12-10 11:33:06.963142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.852 ms, result 0 00:24:45.952 00:24:45.952 00:24:45.952 11:33:07 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:46.519 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:24:46.519 11:33:08 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:24:46.519 11:33:08 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:24:46.519 11:33:08 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:46.519 11:33:08 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:46.519 11:33:08 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:24:46.519 11:33:08 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:46.519 11:33:08 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78963 00:24:46.519 11:33:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78963 ']' 00:24:46.519 11:33:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78963 00:24:46.519 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78963) - No such process 00:24:46.519 Process with pid 78963 is not found 00:24:46.519 11:33:08 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78963 is not found' 00:24:46.519 00:24:46.519 real 1m9.646s 00:24:46.519 user 1m34.551s 00:24:46.519 sys 0m7.137s 00:24:46.519 11:33:08 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.519 11:33:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:46.519 ************************************ 00:24:46.519 END TEST ftl_trim 00:24:46.519 ************************************ 00:24:46.519 11:33:08 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:24:46.519 11:33:08 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:46.519 11:33:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.519 11:33:08 ftl -- common/autotest_common.sh@10 
-- # set +x 00:24:46.778 ************************************ 00:24:46.778 START TEST ftl_restore 00:24:46.778 ************************************ 00:24:46.778 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:24:46.778 * Looking for test storage... 00:24:46.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:46.778 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:46.778 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:24:46.778 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:46.778 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:24:46.778 11:33:08 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:46.779 11:33:08 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:24:46.779 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:46.779 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:46.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.779 --rc genhtml_branch_coverage=1 00:24:46.779 --rc genhtml_function_coverage=1 00:24:46.779 --rc genhtml_legend=1 00:24:46.779 --rc geninfo_all_blocks=1 00:24:46.779 --rc geninfo_unexecuted_blocks=1 00:24:46.779 00:24:46.779 ' 00:24:46.779 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:46.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.779 --rc genhtml_branch_coverage=1 00:24:46.779 --rc genhtml_function_coverage=1 00:24:46.779 --rc genhtml_legend=1 00:24:46.779 --rc geninfo_all_blocks=1 00:24:46.779 --rc geninfo_unexecuted_blocks=1 00:24:46.779 00:24:46.779 ' 00:24:46.779 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:46.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.779 --rc genhtml_branch_coverage=1 00:24:46.779 --rc genhtml_function_coverage=1 00:24:46.779 --rc genhtml_legend=1 00:24:46.779 --rc geninfo_all_blocks=1 00:24:46.779 --rc geninfo_unexecuted_blocks=1 00:24:46.779 00:24:46.779 ' 00:24:46.779 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:46.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.779 --rc genhtml_branch_coverage=1 00:24:46.779 --rc genhtml_function_coverage=1 00:24:46.779 --rc genhtml_legend=1 00:24:46.779 --rc geninfo_all_blocks=1 00:24:46.779 --rc geninfo_unexecuted_blocks=1 00:24:46.779 00:24:46.779 ' 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.dvHexRZrsD 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:46.779 
11:33:08 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79237 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:46.779 11:33:08 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79237 00:24:46.779 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79237 ']' 00:24:46.779 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.779 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.779 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.779 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.779 11:33:08 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:47.039 [2024-12-10 11:33:08.995836] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:24:47.039 [2024-12-10 11:33:08.996236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79237 ] 00:24:47.039 [2024-12-10 11:33:09.179907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.298 [2024-12-10 11:33:09.285619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.943 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.943 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:24:47.943 11:33:10 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:47.943 11:33:10 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:24:47.943 11:33:10 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:47.943 11:33:10 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:24:47.943 11:33:10 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:24:47.943 11:33:10 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:48.516 11:33:10 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:48.516 11:33:10 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:24:48.516 11:33:10 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:48.516 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:48.516 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:48.516 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:48.516 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:48.516 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:48.774 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:48.774 { 00:24:48.774 "name": "nvme0n1", 00:24:48.774 "aliases": [ 00:24:48.774 "1872594b-01fb-4498-af99-49c181377cef" 00:24:48.774 ], 00:24:48.774 "product_name": "NVMe disk", 00:24:48.774 "block_size": 4096, 00:24:48.774 "num_blocks": 1310720, 00:24:48.774 "uuid": 
"1872594b-01fb-4498-af99-49c181377cef", 00:24:48.774 "numa_id": -1, 00:24:48.774 "assigned_rate_limits": { 00:24:48.774 "rw_ios_per_sec": 0, 00:24:48.774 "rw_mbytes_per_sec": 0, 00:24:48.774 "r_mbytes_per_sec": 0, 00:24:48.774 "w_mbytes_per_sec": 0 00:24:48.774 }, 00:24:48.774 "claimed": true, 00:24:48.774 "claim_type": "read_many_write_one", 00:24:48.774 "zoned": false, 00:24:48.774 "supported_io_types": { 00:24:48.774 "read": true, 00:24:48.774 "write": true, 00:24:48.774 "unmap": true, 00:24:48.774 "flush": true, 00:24:48.774 "reset": true, 00:24:48.774 "nvme_admin": true, 00:24:48.774 "nvme_io": true, 00:24:48.774 "nvme_io_md": false, 00:24:48.774 "write_zeroes": true, 00:24:48.774 "zcopy": false, 00:24:48.774 "get_zone_info": false, 00:24:48.774 "zone_management": false, 00:24:48.774 "zone_append": false, 00:24:48.774 "compare": true, 00:24:48.774 "compare_and_write": false, 00:24:48.774 "abort": true, 00:24:48.774 "seek_hole": false, 00:24:48.774 "seek_data": false, 00:24:48.774 "copy": true, 00:24:48.774 "nvme_iov_md": false 00:24:48.774 }, 00:24:48.774 "driver_specific": { 00:24:48.774 "nvme": [ 00:24:48.774 { 00:24:48.774 "pci_address": "0000:00:11.0", 00:24:48.774 "trid": { 00:24:48.774 "trtype": "PCIe", 00:24:48.774 "traddr": "0000:00:11.0" 00:24:48.774 }, 00:24:48.774 "ctrlr_data": { 00:24:48.774 "cntlid": 0, 00:24:48.774 "vendor_id": "0x1b36", 00:24:48.774 "model_number": "QEMU NVMe Ctrl", 00:24:48.774 "serial_number": "12341", 00:24:48.774 "firmware_revision": "8.0.0", 00:24:48.774 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:48.774 "oacs": { 00:24:48.774 "security": 0, 00:24:48.774 "format": 1, 00:24:48.774 "firmware": 0, 00:24:48.774 "ns_manage": 1 00:24:48.774 }, 00:24:48.774 "multi_ctrlr": false, 00:24:48.774 "ana_reporting": false 00:24:48.774 }, 00:24:48.774 "vs": { 00:24:48.774 "nvme_version": "1.4" 00:24:48.774 }, 00:24:48.774 "ns_data": { 00:24:48.774 "id": 1, 00:24:48.774 "can_share": false 00:24:48.774 } 00:24:48.774 } 00:24:48.774 ], 00:24:48.774 "mp_policy": "active_passive" 00:24:48.774 } 00:24:48.774 } 00:24:48.774 ]' 00:24:48.774 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:48.774 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:48.774 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:48.774 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:48.774 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:48.774 11:33:10 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:24:48.774 11:33:10 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:24:48.774 11:33:10 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:48.774 11:33:10 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:24:48.774 11:33:10 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:48.774 11:33:10 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:49.033 11:33:11 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=dc42cd24-8510-4b91-80b0-68b02e9b8300 00:24:49.033 11:33:11 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:24:49.033 11:33:11 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc42cd24-8510-4b91-80b0-68b02e9b8300 00:24:49.291 11:33:11 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:24:49.550 11:33:11 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=6b8dd37b-2a9d-4bac-8ae4-555b3d4af959 00:24:49.550 11:33:11 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6b8dd37b-2a9d-4bac-8ae4-555b3d4af959 00:24:49.808 11:33:11 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=5dcb3c4a-31ee-40ca-b3fd-3e39973be84b 00:24:49.808 11:33:11 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:24:49.808 11:33:11 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5dcb3c4a-31ee-40ca-b3fd-3e39973be84b 00:24:49.808 11:33:11 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:24:49.808 11:33:11 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:49.808 11:33:11 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=5dcb3c4a-31ee-40ca-b3fd-3e39973be84b 00:24:49.808 11:33:11 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:24:49.808 11:33:11 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 5dcb3c4a-31ee-40ca-b3fd-3e39973be84b 00:24:49.808 11:33:11 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=5dcb3c4a-31ee-40ca-b3fd-3e39973be84b 00:24:49.808 11:33:11 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:49.808 11:33:11 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:49.808 11:33:11 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:49.808 11:33:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5dcb3c4a-31ee-40ca-b3fd-3e39973be84b 00:24:50.374 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:50.374 { 00:24:50.374 "name": "5dcb3c4a-31ee-40ca-b3fd-3e39973be84b", 00:24:50.374 "aliases": [ 00:24:50.374 "lvs/nvme0n1p0" 00:24:50.374 ], 00:24:50.374 "product_name": "Logical Volume", 00:24:50.374 "block_size": 4096, 00:24:50.374 "num_blocks": 26476544, 00:24:50.374 "uuid": "5dcb3c4a-31ee-40ca-b3fd-3e39973be84b", 00:24:50.374 "assigned_rate_limits": { 00:24:50.374 "rw_ios_per_sec": 0, 00:24:50.374 "rw_mbytes_per_sec": 0, 00:24:50.374 "r_mbytes_per_sec": 0, 00:24:50.374 "w_mbytes_per_sec": 0 00:24:50.374 }, 00:24:50.374 "claimed": false, 00:24:50.374 "zoned": false, 00:24:50.374 "supported_io_types": { 00:24:50.374 "read": true, 00:24:50.374 "write": true, 00:24:50.374 "unmap": true, 00:24:50.374 "flush": false, 00:24:50.374 "reset": true, 00:24:50.374 "nvme_admin": false, 00:24:50.374 "nvme_io": false, 00:24:50.374 "nvme_io_md": false, 00:24:50.374 "write_zeroes": true, 00:24:50.374 "zcopy": false, 00:24:50.374 "get_zone_info": false, 00:24:50.374 "zone_management": false, 00:24:50.374 "zone_append": false, 00:24:50.374 "compare": false, 00:24:50.374 "compare_and_write": false, 00:24:50.374 "abort": false, 00:24:50.374 "seek_hole": true, 00:24:50.374 "seek_data": true, 00:24:50.374 "copy": false, 00:24:50.374 "nvme_iov_md": false 00:24:50.374 }, 00:24:50.374 "driver_specific": { 00:24:50.374 "lvol": { 00:24:50.374 "lvol_store_uuid": "6b8dd37b-2a9d-4bac-8ae4-555b3d4af959", 00:24:50.374 "base_bdev": "nvme0n1", 00:24:50.374 "thin_provision": true, 00:24:50.374 "num_allocated_clusters": 0, 00:24:50.374 "snapshot": false, 00:24:50.374 "clone": false, 00:24:50.374 "esnap_clone": false 00:24:50.374 } 00:24:50.374 } 00:24:50.374 } 00:24:50.374 ]' 00:24:50.374 11:33:12 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:50.374 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:50.374 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:50.374 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:50.374 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:50.374 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:50.374 11:33:12 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:24:50.374 11:33:12 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:24:50.374 11:33:12 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:50.633 11:33:12 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:50.633 11:33:12 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:50.633 11:33:12 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 5dcb3c4a-31ee-40ca-b3fd-3e39973be84b 00:24:50.633 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=5dcb3c4a-31ee-40ca-b3fd-3e39973be84b 00:24:50.633 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:50.633 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:50.633 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:50.633 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5dcb3c4a-31ee-40ca-b3fd-3e39973be84b 00:24:50.891 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:50.891 { 00:24:50.891 "name": "5dcb3c4a-31ee-40ca-b3fd-3e39973be84b", 00:24:50.891 "aliases": [ 00:24:50.891 "lvs/nvme0n1p0" 00:24:50.891 ], 00:24:50.891 "product_name": "Logical Volume", 00:24:50.891 "block_size": 4096, 00:24:50.891 "num_blocks": 26476544, 00:24:50.891 "uuid": "5dcb3c4a-31ee-40ca-b3fd-3e39973be84b", 00:24:50.891 "assigned_rate_limits": { 00:24:50.891 "rw_ios_per_sec": 0, 00:24:50.891 "rw_mbytes_per_sec": 0, 00:24:50.891 "r_mbytes_per_sec": 0, 00:24:50.891 "w_mbytes_per_sec": 0 00:24:50.891 }, 00:24:50.891 "claimed": false, 00:24:50.891 "zoned": false, 00:24:50.891 "supported_io_types": { 00:24:50.891 "read": true, 00:24:50.891 "write": true, 00:24:50.891 "unmap": true, 00:24:50.891 "flush": false, 00:24:50.891 "reset": true, 00:24:50.891 "nvme_admin": false, 00:24:50.891 "nvme_io": false, 00:24:50.891 "nvme_io_md": false, 00:24:50.891 "write_zeroes": true, 00:24:50.891 "zcopy": false, 00:24:50.891 "get_zone_info": false, 00:24:50.891 "zone_management": false, 00:24:50.891 "zone_append": false, 00:24:50.891 "compare": false, 00:24:50.891 "compare_and_write": false, 00:24:50.891 "abort": false, 00:24:50.891 "seek_hole": true, 00:24:50.891 "seek_data": true, 00:24:50.891 "copy": false, 00:24:50.891 "nvme_iov_md": false 00:24:50.891 }, 00:24:50.891 "driver_specific": { 00:24:50.892 "lvol": { 00:24:50.892 "lvol_store_uuid": "6b8dd37b-2a9d-4bac-8ae4-555b3d4af959", 00:24:50.892 "base_bdev": "nvme0n1", 00:24:50.892 "thin_provision": true, 00:24:50.892 "num_allocated_clusters": 0, 00:24:50.892 "snapshot": false, 00:24:50.892 "clone": false, 00:24:50.892 "esnap_clone": false 00:24:50.892 } 00:24:50.892 } 00:24:50.892 } 00:24:50.892 ]' 00:24:50.892 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:24:50.892 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:50.892 11:33:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:50.892 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:50.892 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:50.892 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:50.892 11:33:13 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:24:50.892 11:33:13 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:51.459 11:33:13 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:24:51.459 11:33:13 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 5dcb3c4a-31ee-40ca-b3fd-3e39973be84b 00:24:51.459 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=5dcb3c4a-31ee-40ca-b3fd-3e39973be84b 00:24:51.459 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:51.459 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:51.459 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:51.459 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5dcb3c4a-31ee-40ca-b3fd-3e39973be84b 00:24:51.459 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:51.459 { 00:24:51.459 "name": "5dcb3c4a-31ee-40ca-b3fd-3e39973be84b", 00:24:51.459 "aliases": [ 00:24:51.459 "lvs/nvme0n1p0" 00:24:51.459 ], 00:24:51.459 "product_name": "Logical Volume", 00:24:51.459 "block_size": 4096, 00:24:51.459 "num_blocks": 26476544, 00:24:51.459 "uuid": "5dcb3c4a-31ee-40ca-b3fd-3e39973be84b", 00:24:51.459 "assigned_rate_limits": { 00:24:51.459 "rw_ios_per_sec": 0, 00:24:51.459 "rw_mbytes_per_sec": 0, 00:24:51.459 "r_mbytes_per_sec": 0, 00:24:51.459 "w_mbytes_per_sec": 0 00:24:51.459 }, 00:24:51.459 "claimed": false, 00:24:51.459 "zoned": false, 00:24:51.459 "supported_io_types": { 00:24:51.459 "read": true, 00:24:51.459 "write": true, 00:24:51.459 "unmap": true, 00:24:51.459 "flush": false, 00:24:51.459 "reset": true, 00:24:51.459 "nvme_admin": false, 00:24:51.459 "nvme_io": false, 00:24:51.459 "nvme_io_md": false, 00:24:51.459 "write_zeroes": true, 00:24:51.459 "zcopy": false, 00:24:51.459 "get_zone_info": false, 00:24:51.459 "zone_management": false, 00:24:51.459 "zone_append": false, 00:24:51.459 "compare": false, 00:24:51.459 "compare_and_write": false, 00:24:51.459 "abort": false, 00:24:51.459 "seek_hole": true, 00:24:51.459 "seek_data": true, 00:24:51.459 "copy": false, 00:24:51.459 "nvme_iov_md": false 00:24:51.459 }, 00:24:51.459 "driver_specific": { 00:24:51.459 "lvol": { 00:24:51.459 "lvol_store_uuid": "6b8dd37b-2a9d-4bac-8ae4-555b3d4af959", 00:24:51.459 "base_bdev": "nvme0n1", 00:24:51.459 "thin_provision": true, 00:24:51.459 "num_allocated_clusters": 0, 00:24:51.459 "snapshot": false, 00:24:51.459 "clone": false, 00:24:51.459 "esnap_clone": false 00:24:51.459 } 00:24:51.459 } 00:24:51.459 } 00:24:51.459 ]' 00:24:51.459 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:51.716 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:51.716 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:51.716 11:33:13 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:24:51.716 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:51.716 11:33:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:51.716 11:33:13 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:24:51.716 11:33:13 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5dcb3c4a-31ee-40ca-b3fd-3e39973be84b --l2p_dram_limit 10' 00:24:51.716 11:33:13 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:24:51.716 11:33:13 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:24:51.716 11:33:13 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:51.716 11:33:13 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:24:51.716 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:24:51.716 11:33:13 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5dcb3c4a-31ee-40ca-b3fd-3e39973be84b --l2p_dram_limit 10 -c nvc0n1p0 00:24:51.975 [2024-12-10 11:33:13.965487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-12-10 11:33:13.965781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:51.975 [2024-12-10 11:33:13.965823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:51.975 [2024-12-10 11:33:13.965839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-12-10 11:33:13.965933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-12-10 11:33:13.965953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:51.975 [2024-12-10 11:33:13.965970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:51.975 [2024-12-10 11:33:13.965983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-12-10 11:33:13.966026] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:51.975 [2024-12-10 11:33:13.967019] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:51.975 [2024-12-10 11:33:13.967052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-12-10 11:33:13.967067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:51.975 [2024-12-10 11:33:13.967083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:24:51.975 [2024-12-10 11:33:13.967096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-12-10 11:33:13.967229] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e6d53703-a958-44d8-b655-6b7e2fd3fe66 00:24:51.975 [2024-12-10 11:33:13.968299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-12-10 11:33:13.968346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:51.975 [2024-12-10 11:33:13.968365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:51.975 [2024-12-10 11:33:13.968387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-12-10 11:33:13.973071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-12-10 
11:33:13.973123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:51.975 [2024-12-10 11:33:13.973142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.611 ms 00:24:51.975 [2024-12-10 11:33:13.973158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-12-10 11:33:13.973287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-12-10 11:33:13.973312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:51.975 [2024-12-10 11:33:13.973326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:24:51.975 [2024-12-10 11:33:13.973346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-12-10 11:33:13.973428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-12-10 11:33:13.973455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:51.975 [2024-12-10 11:33:13.973473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:51.975 [2024-12-10 11:33:13.973488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-12-10 11:33:13.973522] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:51.975 [2024-12-10 11:33:13.978097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-12-10 11:33:13.978139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:51.975 [2024-12-10 11:33:13.978163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.581 ms 00:24:51.975 [2024-12-10 11:33:13.978176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-12-10 11:33:13.978226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-12-10 11:33:13.978243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:51.975 [2024-12-10 11:33:13.978259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:51.975 [2024-12-10 11:33:13.978272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-12-10 11:33:13.978336] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:51.975 [2024-12-10 11:33:13.978503] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:51.975 [2024-12-10 11:33:13.978527] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:51.975 [2024-12-10 11:33:13.978545] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:51.975 [2024-12-10 11:33:13.978563] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:51.976 [2024-12-10 11:33:13.978578] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:51.976 [2024-12-10 11:33:13.978593] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:51.976 [2024-12-10 11:33:13.978608] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:51.976 [2024-12-10 11:33:13.978624] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:51.976 [2024-12-10 11:33:13.978662] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:51.976 [2024-12-10 11:33:13.978678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-12-10 11:33:13.978703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:51.976 [2024-12-10 11:33:13.978721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:24:51.976 [2024-12-10 11:33:13.978734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-12-10 11:33:13.978850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-12-10 11:33:13.978869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:51.976 [2024-12-10 11:33:13.978885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:51.976 [2024-12-10 11:33:13.978900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-12-10 11:33:13.979016] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:51.976 [2024-12-10 11:33:13.979034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:51.976 [2024-12-10 11:33:13.979050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.976 [2024-12-10 11:33:13.979064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:51.976 [2024-12-10 11:33:13.979090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:51.976 [2024-12-10 11:33:13.979116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:51.976 [2024-12-10 11:33:13.979130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.976 [2024-12-10 11:33:13.979155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:51.976 [2024-12-10 11:33:13.979168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:51.976 [2024-12-10 11:33:13.979184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.976 [2024-12-10 11:33:13.979197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:51.976 [2024-12-10 11:33:13.979210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:51.976 [2024-12-10 11:33:13.979222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:51.976 [2024-12-10 11:33:13.979250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:51.976 [2024-12-10 11:33:13.979263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:51.976 [2024-12-10 11:33:13.979288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.976 [2024-12-10 11:33:13.979316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:51.976 
[2024-12-10 11:33:13.979328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.976 [2024-12-10 11:33:13.979353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:51.976 [2024-12-10 11:33:13.979367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.976 [2024-12-10 11:33:13.979393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:51.976 [2024-12-10 11:33:13.979404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.976 [2024-12-10 11:33:13.979429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:51.976 [2024-12-10 11:33:13.979444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.976 [2024-12-10 11:33:13.979470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:51.976 [2024-12-10 11:33:13.979482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:51.976 [2024-12-10 11:33:13.979495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.976 [2024-12-10 11:33:13.979506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:51.976 [2024-12-10 11:33:13.979522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:51.976 [2024-12-10 11:33:13.979534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:51.976 [2024-12-10 11:33:13.979559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:51.976 [2024-12-10 11:33:13.979572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979583] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:51.976 [2024-12-10 11:33:13.979598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:51.976 [2024-12-10 11:33:13.979611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.976 [2024-12-10 11:33:13.979640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.976 [2024-12-10 11:33:13.979658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:51.976 [2024-12-10 11:33:13.979675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:51.976 [2024-12-10 11:33:13.979687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:51.976 [2024-12-10 11:33:13.979711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:51.976 [2024-12-10 11:33:13.979723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:51.976 [2024-12-10 11:33:13.979746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:51.976 [2024-12-10 11:33:13.979761] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:51.976 [2024-12-10 
11:33:13.979781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.976 [2024-12-10 11:33:13.979796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:51.976 [2024-12-10 11:33:13.979810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:51.976 [2024-12-10 11:33:13.979823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:51.976 [2024-12-10 11:33:13.979837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:51.976 [2024-12-10 11:33:13.979850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:51.976 [2024-12-10 11:33:13.979865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:51.976 [2024-12-10 11:33:13.979877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:51.976 [2024-12-10 11:33:13.979904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:51.976 [2024-12-10 11:33:13.979917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:51.976 [2024-12-10 11:33:13.979935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:51.976 [2024-12-10 11:33:13.979948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:51.976 [2024-12-10 11:33:13.979962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:51.977 [2024-12-10 11:33:13.979974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:51.977 [2024-12-10 11:33:13.979988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:51.977 [2024-12-10 11:33:13.980001] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:51.977 [2024-12-10 11:33:13.980017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.977 [2024-12-10 11:33:13.980030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:51.977 [2024-12-10 11:33:13.980045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:51.977 [2024-12-10 11:33:13.980057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:51.977 [2024-12-10 11:33:13.980071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:51.977 [2024-12-10 11:33:13.980085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.977 [2024-12-10 11:33:13.980100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:51.977 [2024-12-10 11:33:13.980113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.146 ms 00:24:51.977 [2024-12-10 11:33:13.980127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.977 [2024-12-10 11:33:13.980182] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:51.977 [2024-12-10 11:33:13.980205] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:53.876 [2024-12-10 11:33:15.921102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.876 [2024-12-10 11:33:15.921186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:53.876 [2024-12-10 11:33:15.921214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1940.929 ms 00:24:53.876 [2024-12-10 11:33:15.921232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.876 [2024-12-10 11:33:15.959505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.876 [2024-12-10 11:33:15.959855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:53.876 [2024-12-10 11:33:15.959907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.961 ms 00:24:53.876 [2024-12-10 11:33:15.959928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.876 [2024-12-10 11:33:15.960164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.876 [2024-12-10 11:33:15.960194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:53.876 [2024-12-10 11:33:15.960215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:53.876 [2024-12-10 11:33:15.960235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.876 [2024-12-10 11:33:16.008607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.876 [2024-12-10 11:33:16.008692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:53.876 [2024-12-10 11:33:16.008717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.301 ms 00:24:53.876 [2024-12-10 11:33:16.008738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.876 [2024-12-10 11:33:16.008806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.876 [2024-12-10 11:33:16.008829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:53.876 [2024-12-10 11:33:16.008845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:53.876 [2024-12-10 11:33:16.008876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.876 [2024-12-10 11:33:16.009291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.876 [2024-12-10 11:33:16.009319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:53.876 [2024-12-10 11:33:16.009336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:24:53.876 [2024-12-10 11:33:16.009352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.876 
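(Aside: the startup trace above is driven by the restore test's setup RPCs, visible in the shell trace earlier in this run. A minimal sketch of that sequence, assuming a running SPDK target with rpc.py invoked from the repo root; the bdev names, split size, and lvol UUID are taken verbatim from this log:

    # Carve a 5171 MiB split off nvc0n1 to serve as the FTL write-buffer cache
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    # Create the FTL bdev over the thin-provisioned lvol; --l2p_dram_limit 10
    # caps the resident L2P at 10 MiB of DRAM, matching the "l2p maximum
    # resident size is: 9 (of 10) MiB" notice reported just below
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 5dcb3c4a-31ee-40ca-b3fd-3e39973be84b \
        --l2p_dram_limit 10 -c nvc0n1p0
)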
[2024-12-10 11:33:16.009515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.876 [2024-12-10 11:33:16.009541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:53.876 [2024-12-10 11:33:16.009557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:24:53.876 [2024-12-10 11:33:16.009575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.876 [2024-12-10 11:33:16.030300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.876 [2024-12-10 11:33:16.030368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:53.876 [2024-12-10 11:33:16.030392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.694 ms 00:24:53.876 [2024-12-10 11:33:16.030410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.135 [2024-12-10 11:33:16.058558] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:54.135 [2024-12-10 11:33:16.061647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.135 [2024-12-10 11:33:16.061691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:54.135 [2024-12-10 11:33:16.061719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.073 ms 00:24:54.135 [2024-12-10 11:33:16.061735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.135 [2024-12-10 11:33:16.125218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.135 [2024-12-10 11:33:16.125298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:54.135 [2024-12-10 11:33:16.125327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.415 ms 00:24:54.135 [2024-12-10 11:33:16.125343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.135 [2024-12-10 11:33:16.125619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.135 [2024-12-10 11:33:16.125663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:54.135 [2024-12-10 11:33:16.125688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:24:54.135 [2024-12-10 11:33:16.125714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.135 [2024-12-10 11:33:16.164122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.135 [2024-12-10 11:33:16.164360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:54.135 [2024-12-10 11:33:16.164407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.308 ms 00:24:54.135 [2024-12-10 11:33:16.164434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.135 [2024-12-10 11:33:16.199456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.135 [2024-12-10 11:33:16.199669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:54.135 [2024-12-10 11:33:16.199705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.948 ms 00:24:54.135 [2024-12-10 11:33:16.199719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.135 [2024-12-10 11:33:16.200514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.135 [2024-12-10 11:33:16.200542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:54.135 
[2024-12-10 11:33:16.200564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:24:54.135 [2024-12-10 11:33:16.200577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.135 [2024-12-10 11:33:16.284137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.135 [2024-12-10 11:33:16.284203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:54.135 [2024-12-10 11:33:16.284231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.469 ms 00:24:54.135 [2024-12-10 11:33:16.284245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.393 [2024-12-10 11:33:16.317601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.393 [2024-12-10 11:33:16.317689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:54.393 [2024-12-10 11:33:16.317713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.234 ms 00:24:54.393 [2024-12-10 11:33:16.317727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.393 [2024-12-10 11:33:16.349497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.393 [2024-12-10 11:33:16.349539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:54.393 [2024-12-10 11:33:16.349576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.710 ms 00:24:54.393 [2024-12-10 11:33:16.349589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.393 [2024-12-10 11:33:16.381227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.393 [2024-12-10 11:33:16.381270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:54.393 [2024-12-10 11:33:16.381308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.566 ms 00:24:54.393 [2024-12-10 11:33:16.381320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.393 [2024-12-10 11:33:16.381378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.393 [2024-12-10 11:33:16.381398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:54.393 [2024-12-10 11:33:16.381416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:54.393 [2024-12-10 11:33:16.381429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.393 [2024-12-10 11:33:16.381559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.393 [2024-12-10 11:33:16.381583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:54.393 [2024-12-10 11:33:16.381599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:54.393 [2024-12-10 11:33:16.381611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.394 [2024-12-10 11:33:16.382686] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2416.721 ms, result 0 00:24:54.394 { 00:24:54.394 "name": "ftl0", 00:24:54.394 "uuid": "e6d53703-a958-44d8-b655-6b7e2fd3fe66" 00:24:54.394 } 00:24:54.394 11:33:16 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:24:54.394 11:33:16 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:54.652 11:33:16 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:24:54.652 11:33:16 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:54.910 [2024-12-10 11:33:16.902250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.910 [2024-12-10 11:33:16.902336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:54.910 [2024-12-10 11:33:16.902358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:54.910 [2024-12-10 11:33:16.902373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.910 [2024-12-10 11:33:16.902412] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:54.910 [2024-12-10 11:33:16.905805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.910 [2024-12-10 11:33:16.905841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:54.910 [2024-12-10 11:33:16.905862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.365 ms 00:24:54.910 [2024-12-10 11:33:16.905875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.910 [2024-12-10 11:33:16.906202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.910 [2024-12-10 11:33:16.906222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:54.910 [2024-12-10 11:33:16.906239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:24:54.910 [2024-12-10 11:33:16.906252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.910 [2024-12-10 11:33:16.909559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.910 [2024-12-10 11:33:16.909591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:54.910 [2024-12-10 11:33:16.909610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.280 ms 00:24:54.910 [2024-12-10 11:33:16.909623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.910 [2024-12-10 11:33:16.916321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.910 [2024-12-10 11:33:16.916356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:54.910 [2024-12-10 11:33:16.916375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.652 ms 00:24:54.910 [2024-12-10 11:33:16.916387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.910 [2024-12-10 11:33:16.947557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.910 [2024-12-10 11:33:16.947757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:54.910 [2024-12-10 11:33:16.947794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.082 ms 00:24:54.910 [2024-12-10 11:33:16.947808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.910 [2024-12-10 11:33:16.966196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.910 [2024-12-10 11:33:16.966252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:54.910 [2024-12-10 11:33:16.966275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.323 ms 00:24:54.911 [2024-12-10 11:33:16.966289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.911 [2024-12-10 11:33:16.966490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.911 [2024-12-10 11:33:16.966512] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:54.911 [2024-12-10 11:33:16.966529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:24:54.911 [2024-12-10 11:33:16.966545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.911 [2024-12-10 11:33:16.998362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.911 [2024-12-10 11:33:16.998447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:54.911 [2024-12-10 11:33:16.998471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.773 ms 00:24:54.911 [2024-12-10 11:33:16.998485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.911 [2024-12-10 11:33:17.029871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.911 [2024-12-10 11:33:17.030098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:54.911 [2024-12-10 11:33:17.030137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.289 ms 00:24:54.911 [2024-12-10 11:33:17.030151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.911 [2024-12-10 11:33:17.060861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.911 [2024-12-10 11:33:17.061070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:54.911 [2024-12-10 11:33:17.061107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.638 ms 00:24:54.911 [2024-12-10 11:33:17.061122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.170 [2024-12-10 11:33:17.092118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.170 [2024-12-10 11:33:17.092182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:55.170 [2024-12-10 11:33:17.092206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.835 ms 00:24:55.170 [2024-12-10 11:33:17.092218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.170 [2024-12-10 11:33:17.092276] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:55.170 [2024-12-10 11:33:17.092305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092438] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 
[2024-12-10 11:33:17.092816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.092997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.093010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.093024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:55.170 [2024-12-10 11:33:17.093037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:24:55.171 [2024-12-10 11:33:17.093185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:55.171 [2024-12-10 11:33:17.093781] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:55.171 [2024-12-10 11:33:17.093796] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e6d53703-a958-44d8-b655-6b7e2fd3fe66 00:24:55.171 [2024-12-10 11:33:17.093809] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:55.171 [2024-12-10 11:33:17.093829] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:55.171 [2024-12-10 11:33:17.093840] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:55.171 [2024-12-10 11:33:17.093855] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:55.171 [2024-12-10 11:33:17.093867] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:55.171 [2024-12-10 11:33:17.093881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:55.171 [2024-12-10 11:33:17.093893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:55.171 [2024-12-10 11:33:17.093906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:55.171 [2024-12-10 11:33:17.093917] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:24:55.171 [2024-12-10 11:33:17.093932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.171 [2024-12-10 11:33:17.093944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:55.171 [2024-12-10 11:33:17.093960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.659 ms 00:24:55.171 [2024-12-10 11:33:17.093974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.171 [2024-12-10 11:33:17.110669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.171 [2024-12-10 11:33:17.110859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:55.171 [2024-12-10 11:33:17.110895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.618 ms 00:24:55.171 [2024-12-10 11:33:17.110910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.171 [2024-12-10 11:33:17.111356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.171 [2024-12-10 11:33:17.111381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:55.171 [2024-12-10 11:33:17.111398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:24:55.171 [2024-12-10 11:33:17.111411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.171 [2024-12-10 11:33:17.167154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.171 [2024-12-10 11:33:17.167225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:55.171 [2024-12-10 11:33:17.167266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.171 [2024-12-10 11:33:17.167279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.171 [2024-12-10 11:33:17.167372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.171 [2024-12-10 11:33:17.167392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:55.171 [2024-12-10 11:33:17.167408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.171 [2024-12-10 11:33:17.167421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.171 [2024-12-10 11:33:17.167582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.171 [2024-12-10 11:33:17.167603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:55.171 [2024-12-10 11:33:17.167620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.171 [2024-12-10 11:33:17.167632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.171 [2024-12-10 11:33:17.167692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.171 [2024-12-10 11:33:17.167709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:55.171 [2024-12-10 11:33:17.167728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.171 [2024-12-10 11:33:17.167741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.171 [2024-12-10 11:33:17.271402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.171 [2024-12-10 11:33:17.271695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:55.171 [2024-12-10 11:33:17.271732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:55.171 [2024-12-10 11:33:17.271747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.430 [2024-12-10 11:33:17.357158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.430 [2024-12-10 11:33:17.357226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:55.430 [2024-12-10 11:33:17.357254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.430 [2024-12-10 11:33:17.357267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.430 [2024-12-10 11:33:17.357411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.430 [2024-12-10 11:33:17.357431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:55.430 [2024-12-10 11:33:17.357447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.430 [2024-12-10 11:33:17.357459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.430 [2024-12-10 11:33:17.357533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.430 [2024-12-10 11:33:17.357551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:55.430 [2024-12-10 11:33:17.357567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.430 [2024-12-10 11:33:17.357581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.430 [2024-12-10 11:33:17.357748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.430 [2024-12-10 11:33:17.357772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:55.430 [2024-12-10 11:33:17.357788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.430 [2024-12-10 11:33:17.357800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.430 [2024-12-10 11:33:17.357863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.430 [2024-12-10 11:33:17.357881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:55.430 [2024-12-10 11:33:17.357896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.430 [2024-12-10 11:33:17.357908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.430 [2024-12-10 11:33:17.357962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.430 [2024-12-10 11:33:17.357978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:55.430 [2024-12-10 11:33:17.357993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.430 [2024-12-10 11:33:17.358005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.430 [2024-12-10 11:33:17.358065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.430 [2024-12-10 11:33:17.358083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:55.430 [2024-12-10 11:33:17.358098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.430 [2024-12-10 11:33:17.358110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.430 [2024-12-10 11:33:17.358273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 455.982 ms, result 0 00:24:55.430 true 00:24:55.430 11:33:17 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79237 
00:24:55.430 11:33:17 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79237 ']' 00:24:55.430 11:33:17 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79237 00:24:55.430 11:33:17 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:24:55.430 11:33:17 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.430 11:33:17 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79237 00:24:55.430 killing process with pid 79237 00:24:55.430 11:33:17 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:55.430 11:33:17 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:55.430 11:33:17 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79237' 00:24:55.430 11:33:17 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79237 00:24:55.430 11:33:17 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79237 00:24:58.714 11:33:20 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:25:03.986 262144+0 records in 00:25:03.986 262144+0 records out 00:25:03.986 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.96303 s, 216 MB/s 00:25:03.986 11:33:25 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:05.885 11:33:27 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:05.885 [2024-12-10 11:33:27.733404] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:25:05.885 [2024-12-10 11:33:27.733572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79475 ] 00:25:05.885 [2024-12-10 11:33:27.912380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.885 [2024-12-10 11:33:28.019652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.454 [2024-12-10 11:33:28.352969] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:06.454 [2024-12-10 11:33:28.353046] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:06.454 [2024-12-10 11:33:28.520437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-12-10 11:33:28.520511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:06.454 [2024-12-10 11:33:28.520533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:06.454 [2024-12-10 11:33:28.520546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-12-10 11:33:28.520648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-12-10 11:33:28.520678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:06.454 [2024-12-10 11:33:28.520692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:06.454 [2024-12-10 11:33:28.520704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-12-10 11:33:28.520738] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:25:06.454 [2024-12-10 11:33:28.521677] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:06.454 [2024-12-10 11:33:28.521861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-12-10 11:33:28.521881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:06.454 [2024-12-10 11:33:28.521895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:25:06.454 [2024-12-10 11:33:28.521906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-12-10 11:33:28.523085] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:06.454 [2024-12-10 11:33:28.539751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-12-10 11:33:28.539799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:06.454 [2024-12-10 11:33:28.539819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.667 ms 00:25:06.454 [2024-12-10 11:33:28.539831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-12-10 11:33:28.539936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-12-10 11:33:28.539957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:06.454 [2024-12-10 11:33:28.539971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:06.454 [2024-12-10 11:33:28.539982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-12-10 11:33:28.544311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-12-10 11:33:28.544359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:06.454 [2024-12-10 11:33:28.544377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:25:06.454 [2024-12-10 11:33:28.544403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-12-10 11:33:28.544530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-12-10 11:33:28.544551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:06.454 [2024-12-10 11:33:28.544564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:25:06.454 [2024-12-10 11:33:28.544575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-12-10 11:33:28.544666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-12-10 11:33:28.544685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:06.454 [2024-12-10 11:33:28.544698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:06.454 [2024-12-10 11:33:28.544709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-12-10 11:33:28.544761] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:06.454 [2024-12-10 11:33:28.549016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-12-10 11:33:28.549055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:06.454 [2024-12-10 11:33:28.549082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.265 ms 00:25:06.454 [2024-12-10 11:33:28.549094] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-12-10 11:33:28.549148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-12-10 11:33:28.549167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:06.454 [2024-12-10 11:33:28.549180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:06.454 [2024-12-10 11:33:28.549191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-12-10 11:33:28.549237] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:06.454 [2024-12-10 11:33:28.549277] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:06.454 [2024-12-10 11:33:28.549322] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:06.454 [2024-12-10 11:33:28.549352] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:06.454 [2024-12-10 11:33:28.549464] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:06.454 [2024-12-10 11:33:28.549480] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:06.454 [2024-12-10 11:33:28.549494] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:06.454 [2024-12-10 11:33:28.549509] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:06.454 [2024-12-10 11:33:28.549522] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:06.454 [2024-12-10 11:33:28.549535] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:06.454 [2024-12-10 11:33:28.549546] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:06.454 [2024-12-10 11:33:28.549567] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:06.454 [2024-12-10 11:33:28.549578] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:06.454 [2024-12-10 11:33:28.549590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-12-10 11:33:28.549601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:06.454 [2024-12-10 11:33:28.549614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:25:06.454 [2024-12-10 11:33:28.549624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-12-10 11:33:28.549755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-12-10 11:33:28.549771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:06.454 [2024-12-10 11:33:28.549784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:06.455 [2024-12-10 11:33:28.549794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-12-10 11:33:28.549920] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:06.455 [2024-12-10 11:33:28.549946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:06.455 [2024-12-10 11:33:28.549960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:25:06.455 [2024-12-10 11:33:28.549972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.455 [2024-12-10 11:33:28.549984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:06.455 [2024-12-10 11:33:28.549995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:06.455 [2024-12-10 11:33:28.550005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:06.455 [2024-12-10 11:33:28.550016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:06.455 [2024-12-10 11:33:28.550027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:06.455 [2024-12-10 11:33:28.550037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:06.455 [2024-12-10 11:33:28.550048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:06.455 [2024-12-10 11:33:28.550059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:06.455 [2024-12-10 11:33:28.550069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:06.455 [2024-12-10 11:33:28.550101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:06.455 [2024-12-10 11:33:28.550113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:06.455 [2024-12-10 11:33:28.550125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.455 [2024-12-10 11:33:28.550136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:06.455 [2024-12-10 11:33:28.550146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:06.455 [2024-12-10 11:33:28.550156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.455 [2024-12-10 11:33:28.550167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:06.455 [2024-12-10 11:33:28.550177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:06.455 [2024-12-10 11:33:28.550188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:06.455 [2024-12-10 11:33:28.550198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:06.455 [2024-12-10 11:33:28.550209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:06.455 [2024-12-10 11:33:28.550220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:06.455 [2024-12-10 11:33:28.550230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:06.455 [2024-12-10 11:33:28.550240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:06.455 [2024-12-10 11:33:28.550250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:06.455 [2024-12-10 11:33:28.550260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:06.455 [2024-12-10 11:33:28.550271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:06.455 [2024-12-10 11:33:28.550281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:06.455 [2024-12-10 11:33:28.550291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:06.455 [2024-12-10 11:33:28.550301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:06.455 [2024-12-10 11:33:28.550311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:06.455 [2024-12-10 11:33:28.550321] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:25:06.455 [2024-12-10 11:33:28.550331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:06.455 [2024-12-10 11:33:28.550341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:06.455 [2024-12-10 11:33:28.550352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:06.455 [2024-12-10 11:33:28.550362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:06.455 [2024-12-10 11:33:28.550372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.455 [2024-12-10 11:33:28.550382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:06.455 [2024-12-10 11:33:28.550392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:06.455 [2024-12-10 11:33:28.550402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.455 [2024-12-10 11:33:28.550412] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:06.455 [2024-12-10 11:33:28.550423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:06.455 [2024-12-10 11:33:28.550434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:06.455 [2024-12-10 11:33:28.550445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.455 [2024-12-10 11:33:28.550457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:06.455 [2024-12-10 11:33:28.550468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:06.455 [2024-12-10 11:33:28.550478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:06.455 [2024-12-10 11:33:28.550488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:06.455 [2024-12-10 11:33:28.550498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:06.455 [2024-12-10 11:33:28.550508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:06.455 [2024-12-10 11:33:28.550521] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:06.455 [2024-12-10 11:33:28.550535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:06.455 [2024-12-10 11:33:28.550560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:06.455 [2024-12-10 11:33:28.550573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:06.455 [2024-12-10 11:33:28.550584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:06.455 [2024-12-10 11:33:28.550595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:06.455 [2024-12-10 11:33:28.550606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:06.455 [2024-12-10 11:33:28.550617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:06.455 [2024-12-10 11:33:28.550642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:06.455 [2024-12-10 11:33:28.550656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:06.455 [2024-12-10 11:33:28.550667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:06.455 [2024-12-10 11:33:28.550679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:06.455 [2024-12-10 11:33:28.550690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:06.455 [2024-12-10 11:33:28.550701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:06.455 [2024-12-10 11:33:28.550712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:06.455 [2024-12-10 11:33:28.550723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:06.455 [2024-12-10 11:33:28.550734] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:06.455 [2024-12-10 11:33:28.550746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:06.455 [2024-12-10 11:33:28.550759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:06.455 [2024-12-10 11:33:28.550770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:06.455 [2024-12-10 11:33:28.550781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:06.455 [2024-12-10 11:33:28.550792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:06.455 [2024-12-10 11:33:28.550804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.455 [2024-12-10 11:33:28.550816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:06.455 [2024-12-10 11:33:28.550828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:25:06.455 [2024-12-10 11:33:28.550839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-12-10 11:33:28.584806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.455 [2024-12-10 11:33:28.584868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:06.455 [2024-12-10 11:33:28.584891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.869 ms 00:25:06.455 [2024-12-10 11:33:28.584909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-12-10 11:33:28.585023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.455 [2024-12-10 11:33:28.585040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:06.455 [2024-12-10 11:33:28.585053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.065 ms 00:25:06.455 [2024-12-10 11:33:28.585065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.640081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.640325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:06.715 [2024-12-10 11:33:28.640357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.914 ms 00:25:06.715 [2024-12-10 11:33:28.640371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.640457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.640477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:06.715 [2024-12-10 11:33:28.640508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:06.715 [2024-12-10 11:33:28.640520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.640959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.640979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:06.715 [2024-12-10 11:33:28.640993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:25:06.715 [2024-12-10 11:33:28.641005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.641176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.641197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:06.715 [2024-12-10 11:33:28.641220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:25:06.715 [2024-12-10 11:33:28.641231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.658819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.659025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:06.715 [2024-12-10 11:33:28.659056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.557 ms 00:25:06.715 [2024-12-10 11:33:28.659071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.675586] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:06.715 [2024-12-10 11:33:28.675647] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:06.715 [2024-12-10 11:33:28.675670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.675683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:06.715 [2024-12-10 11:33:28.675697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.424 ms 00:25:06.715 [2024-12-10 11:33:28.675708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.705534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.705608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:06.715 [2024-12-10 11:33:28.705643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.776 ms 00:25:06.715 [2024-12-10 11:33:28.705660] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.721555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.721760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:06.715 [2024-12-10 11:33:28.721791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.834 ms 00:25:06.715 [2024-12-10 11:33:28.721805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.737675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.737736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:06.715 [2024-12-10 11:33:28.737755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.819 ms 00:25:06.715 [2024-12-10 11:33:28.737766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.738566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.738590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:06.715 [2024-12-10 11:33:28.738615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:25:06.715 [2024-12-10 11:33:28.738661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.812812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.812887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:06.715 [2024-12-10 11:33:28.812908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.122 ms 00:25:06.715 [2024-12-10 11:33:28.812937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.825812] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:06.715 [2024-12-10 11:33:28.828653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.828720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:06.715 [2024-12-10 11:33:28.828739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.632 ms 00:25:06.715 [2024-12-10 11:33:28.828752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.828877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.828899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:06.715 [2024-12-10 11:33:28.828913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:06.715 [2024-12-10 11:33:28.828925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.829029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.829048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:06.715 [2024-12-10 11:33:28.829062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:06.715 [2024-12-10 11:33:28.829073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.829104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.829119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:25:06.715 [2024-12-10 11:33:28.829131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:06.715 [2024-12-10 11:33:28.829142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.829195] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:06.715 [2024-12-10 11:33:28.829222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.829234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:06.715 [2024-12-10 11:33:28.829246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:06.715 [2024-12-10 11:33:28.829258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.860161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.860430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:06.715 [2024-12-10 11:33:28.860460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.877 ms 00:25:06.715 [2024-12-10 11:33:28.860491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.715 [2024-12-10 11:33:28.860579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.715 [2024-12-10 11:33:28.860598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:06.715 [2024-12-10 11:33:28.860611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:06.716 [2024-12-10 11:33:28.860622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.716 [2024-12-10 11:33:28.861958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.941 ms, result 0 00:25:08.091  [2024-12-10T11:33:31.194Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-10T11:33:32.130Z] Copying: 52/1024 [MB] (25 MBps) [2024-12-10T11:33:33.066Z] Copying: 76/1024 [MB] (24 MBps) [2024-12-10T11:33:34.002Z] Copying: 105/1024 [MB] (28 MBps) [2024-12-10T11:33:34.937Z] Copying: 132/1024 [MB] (27 MBps) [2024-12-10T11:33:36.312Z] Copying: 162/1024 [MB] (30 MBps) [2024-12-10T11:33:36.879Z] Copying: 192/1024 [MB] (29 MBps) [2024-12-10T11:33:38.256Z] Copying: 219/1024 [MB] (27 MBps) [2024-12-10T11:33:39.189Z] Copying: 249/1024 [MB] (29 MBps) [2024-12-10T11:33:40.123Z] Copying: 278/1024 [MB] (29 MBps) [2024-12-10T11:33:41.062Z] Copying: 305/1024 [MB] (26 MBps) [2024-12-10T11:33:42.009Z] Copying: 332/1024 [MB] (27 MBps) [2024-12-10T11:33:42.943Z] Copying: 360/1024 [MB] (27 MBps) [2024-12-10T11:33:43.877Z] Copying: 387/1024 [MB] (27 MBps) [2024-12-10T11:33:45.251Z] Copying: 414/1024 [MB] (27 MBps) [2024-12-10T11:33:46.184Z] Copying: 441/1024 [MB] (27 MBps) [2024-12-10T11:33:47.119Z] Copying: 468/1024 [MB] (26 MBps) [2024-12-10T11:33:48.054Z] Copying: 494/1024 [MB] (25 MBps) [2024-12-10T11:33:48.989Z] Copying: 520/1024 [MB] (26 MBps) [2024-12-10T11:33:49.963Z] Copying: 547/1024 [MB] (26 MBps) [2024-12-10T11:33:50.899Z] Copying: 573/1024 [MB] (26 MBps) [2024-12-10T11:33:52.276Z] Copying: 601/1024 [MB] (27 MBps) [2024-12-10T11:33:53.211Z] Copying: 627/1024 [MB] (26 MBps) [2024-12-10T11:33:54.147Z] Copying: 655/1024 [MB] (27 MBps) [2024-12-10T11:33:55.084Z] Copying: 680/1024 [MB] (25 MBps) [2024-12-10T11:33:56.018Z] Copying: 707/1024 [MB] (26 MBps) [2024-12-10T11:33:56.955Z] Copying: 734/1024 [MB] (26 
MBps) [2024-12-10T11:33:57.891Z] Copying: 760/1024 [MB] (26 MBps) [2024-12-10T11:33:59.268Z] Copying: 787/1024 [MB] (27 MBps) [2024-12-10T11:34:00.201Z] Copying: 814/1024 [MB] (27 MBps) [2024-12-10T11:34:01.137Z] Copying: 842/1024 [MB] (27 MBps) [2024-12-10T11:34:02.075Z] Copying: 869/1024 [MB] (26 MBps) [2024-12-10T11:34:03.012Z] Copying: 896/1024 [MB] (26 MBps) [2024-12-10T11:34:03.948Z] Copying: 922/1024 [MB] (26 MBps) [2024-12-10T11:34:04.884Z] Copying: 949/1024 [MB] (26 MBps) [2024-12-10T11:34:06.263Z] Copying: 977/1024 [MB] (28 MBps) [2024-12-10T11:34:06.831Z] Copying: 1002/1024 [MB] (25 MBps) [2024-12-10T11:34:06.831Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-12-10 11:34:06.664649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.664 [2024-12-10 11:34:06.664705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:44.664 [2024-12-10 11:34:06.664725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:44.664 [2024-12-10 11:34:06.664737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.664 [2024-12-10 11:34:06.664769] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:44.664 [2024-12-10 11:34:06.668129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.664 [2024-12-10 11:34:06.668169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:44.664 [2024-12-10 11:34:06.668201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms 00:25:44.664 [2024-12-10 11:34:06.668213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.664 [2024-12-10 11:34:06.669781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.664 [2024-12-10 11:34:06.669826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:44.664 [2024-12-10 11:34:06.669843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:25:44.664 [2024-12-10 11:34:06.669855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.664 [2024-12-10 11:34:06.686903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.664 [2024-12-10 11:34:06.687091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:44.664 [2024-12-10 11:34:06.687122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.024 ms 00:25:44.664 [2024-12-10 11:34:06.687135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.664 [2024-12-10 11:34:06.693840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.664 [2024-12-10 11:34:06.693996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:44.664 [2024-12-10 11:34:06.694025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.635 ms 00:25:44.664 [2024-12-10 11:34:06.694038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.664 [2024-12-10 11:34:06.725507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.664 [2024-12-10 11:34:06.725711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:44.664 [2024-12-10 11:34:06.725742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.404 ms 00:25:44.665 [2024-12-10 11:34:06.725756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.665 [2024-12-10 11:34:06.743815] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.665 [2024-12-10 11:34:06.743870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:44.665 [2024-12-10 11:34:06.743890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.006 ms 00:25:44.665 [2024-12-10 11:34:06.743902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.665 [2024-12-10 11:34:06.744086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.665 [2024-12-10 11:34:06.744117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:44.665 [2024-12-10 11:34:06.744131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:25:44.665 [2024-12-10 11:34:06.744143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.665 [2024-12-10 11:34:06.775897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.665 [2024-12-10 11:34:06.776101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:44.665 [2024-12-10 11:34:06.776131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.725 ms 00:25:44.665 [2024-12-10 11:34:06.776144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.665 [2024-12-10 11:34:06.807498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.665 [2024-12-10 11:34:06.807713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:44.665 [2024-12-10 11:34:06.807745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.300 ms 00:25:44.665 [2024-12-10 11:34:06.807758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.924 [2024-12-10 11:34:06.838689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.924 [2024-12-10 11:34:06.838886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:44.924 [2024-12-10 11:34:06.838916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.877 ms 00:25:44.924 [2024-12-10 11:34:06.838929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.924 [2024-12-10 11:34:06.869975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.924 [2024-12-10 11:34:06.870159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:44.924 [2024-12-10 11:34:06.870189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.935 ms 00:25:44.924 [2024-12-10 11:34:06.870202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.924 [2024-12-10 11:34:06.870257] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:44.925 [2024-12-10 11:34:06.870282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:44.925 [2024-12-10 11:34:06.870317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:44.925 [2024-12-10 11:34:06.870329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:44.925 [2024-12-10 11:34:06.870341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:44.925 [2024-12-10 11:34:06.870353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:44.925 [2024-12-10 11:34:06.870365] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
(ftl_dev_dump_bands entries for Bands 7 through 79 elided -- every one reads: 0 / 261120 wr_cnt: 0 state: free, identical to the bands shown)
[2024-12-10 11:34:06.871387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120
wr_cnt: 0 state: free 00:25:44.925 [2024-12-10 11:34:06.871399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:44.926 [2024-12-10 11:34:06.871692] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:44.926 [2024-12-10 11:34:06.871718] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e6d53703-a958-44d8-b655-6b7e2fd3fe66 00:25:44.926 [2024-12-10 11:34:06.871731] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:44.926 [2024-12-10 11:34:06.871741] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:44.926 [2024-12-10 11:34:06.871752] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:44.926 [2024-12-10 11:34:06.871764] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:44.926 [2024-12-10 11:34:06.871792] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:44.926 [2024-12-10 11:34:06.871820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:44.926 [2024-12-10 11:34:06.871832] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:44.926 [2024-12-10 11:34:06.871842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:44.926 [2024-12-10 11:34:06.871852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:44.926 [2024-12-10 11:34:06.871864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.926 [2024-12-10 11:34:06.871875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:44.926 [2024-12-10 11:34:06.871888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.609 ms 00:25:44.926 [2024-12-10 11:34:06.871899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.926 [2024-12-10 11:34:06.889756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.926 [2024-12-10 11:34:06.889829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:44.926 [2024-12-10 11:34:06.889850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.776 ms 00:25:44.926 [2024-12-10 11:34:06.889862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.926 [2024-12-10 11:34:06.890319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.926 [2024-12-10 11:34:06.890351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:44.926 [2024-12-10 11:34:06.890367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:25:44.926 [2024-12-10 11:34:06.890403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.926 [2024-12-10 11:34:06.935670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.926 [2024-12-10 11:34:06.935757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:44.926 [2024-12-10 11:34:06.935776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.926 [2024-12-10 11:34:06.935788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.926 [2024-12-10 11:34:06.935870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.926 [2024-12-10 11:34:06.935886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:44.926 [2024-12-10 11:34:06.935898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.926 [2024-12-10 11:34:06.935929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.926 [2024-12-10 11:34:06.936038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.926 [2024-12-10 11:34:06.936059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:44.926 [2024-12-10 11:34:06.936072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.926 [2024-12-10 11:34:06.936083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.926 [2024-12-10 11:34:06.936105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.926 [2024-12-10 11:34:06.936119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:25:44.926 [2024-12-10 11:34:06.936131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.926 [2024-12-10 11:34:06.936142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.926 [2024-12-10 11:34:07.039689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.926 [2024-12-10 11:34:07.039756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:44.926 [2024-12-10 11:34:07.039776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.926 [2024-12-10 11:34:07.039788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.185 [2024-12-10 11:34:07.124197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.185 [2024-12-10 11:34:07.124266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.185 [2024-12-10 11:34:07.124286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.185 [2024-12-10 11:34:07.124307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.185 [2024-12-10 11:34:07.124410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.185 [2024-12-10 11:34:07.124426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:45.185 [2024-12-10 11:34:07.124440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.185 [2024-12-10 11:34:07.124451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.185 [2024-12-10 11:34:07.124498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.185 [2024-12-10 11:34:07.124514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:45.185 [2024-12-10 11:34:07.124526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.185 [2024-12-10 11:34:07.124537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.185 [2024-12-10 11:34:07.124689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.185 [2024-12-10 11:34:07.124711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:45.185 [2024-12-10 11:34:07.124724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.185 [2024-12-10 11:34:07.124735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.185 [2024-12-10 11:34:07.124791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.185 [2024-12-10 11:34:07.124809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:45.185 [2024-12-10 11:34:07.124821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.185 [2024-12-10 11:34:07.124832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.185 [2024-12-10 11:34:07.124877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.185 [2024-12-10 11:34:07.124900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:45.185 [2024-12-10 11:34:07.124913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.185 [2024-12-10 11:34:07.124924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.185 [2024-12-10 11:34:07.124976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.185 [2024-12-10 11:34:07.125000] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:45.185 [2024-12-10 11:34:07.125012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.185 [2024-12-10 11:34:07.125024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.185 [2024-12-10 11:34:07.125172] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.528 ms, result 0 00:25:46.142 00:25:46.142 00:25:46.143 11:34:08 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:25:46.143 [2024-12-10 11:34:08.280109] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:25:46.143 [2024-12-10 11:34:08.280465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79872 ] 00:25:46.401 [2024-12-10 11:34:08.454906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.401 [2024-12-10 11:34:08.558342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.970 [2024-12-10 11:34:08.882380] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:46.970 [2024-12-10 11:34:08.882462] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:46.970 [2024-12-10 11:34:09.043814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.970 [2024-12-10 11:34:09.043886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:46.970 [2024-12-10 11:34:09.043938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:46.970 [2024-12-10 11:34:09.043952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.970 [2024-12-10 11:34:09.044026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.970 [2024-12-10 11:34:09.044049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:46.970 [2024-12-10 11:34:09.044062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:46.970 [2024-12-10 11:34:09.044074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.970 [2024-12-10 11:34:09.044107] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:46.970 [2024-12-10 11:34:09.045154] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:46.970 [2024-12-10 11:34:09.045204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.970 [2024-12-10 11:34:09.045219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:46.970 [2024-12-10 11:34:09.045232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:25:46.970 [2024-12-10 11:34:09.045243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.970 [2024-12-10 11:34:09.046399] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:46.970 [2024-12-10 11:34:09.063036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.970 [2024-12-10 
11:34:09.063086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:46.970 [2024-12-10 11:34:09.063122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.639 ms 00:25:46.970 [2024-12-10 11:34:09.063134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.970 [2024-12-10 11:34:09.063238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.970 [2024-12-10 11:34:09.063259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:46.970 [2024-12-10 11:34:09.063272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:46.970 [2024-12-10 11:34:09.063283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.970 [2024-12-10 11:34:09.068224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.970 [2024-12-10 11:34:09.068293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:46.970 [2024-12-10 11:34:09.068312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.832 ms 00:25:46.970 [2024-12-10 11:34:09.068333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.970 [2024-12-10 11:34:09.068452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.970 [2024-12-10 11:34:09.068473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:46.970 [2024-12-10 11:34:09.068487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:46.970 [2024-12-10 11:34:09.068498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.970 [2024-12-10 11:34:09.068594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.970 [2024-12-10 11:34:09.068613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:46.970 [2024-12-10 11:34:09.068625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:46.970 [2024-12-10 11:34:09.068636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.970 [2024-12-10 11:34:09.068720] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:46.970 [2024-12-10 11:34:09.073081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.970 [2024-12-10 11:34:09.073123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:46.970 [2024-12-10 11:34:09.073146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.370 ms 00:25:46.970 [2024-12-10 11:34:09.073158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.970 [2024-12-10 11:34:09.073211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.970 [2024-12-10 11:34:09.073228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:46.970 [2024-12-10 11:34:09.073240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:46.970 [2024-12-10 11:34:09.073251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.970 [2024-12-10 11:34:09.073304] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:46.970 [2024-12-10 11:34:09.073337] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:46.970 [2024-12-10 11:34:09.073382] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:46.970 [2024-12-10 11:34:09.073407] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:46.970 [2024-12-10 11:34:09.073521] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:46.970 [2024-12-10 11:34:09.073537] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:46.970 [2024-12-10 11:34:09.073552] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:46.970 [2024-12-10 11:34:09.073567] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:46.970 [2024-12-10 11:34:09.073580] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:46.970 [2024-12-10 11:34:09.073592] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:46.970 [2024-12-10 11:34:09.073603] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:46.970 [2024-12-10 11:34:09.073619] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:46.970 [2024-12-10 11:34:09.073653] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:46.970 [2024-12-10 11:34:09.073668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.970 [2024-12-10 11:34:09.073680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:46.970 [2024-12-10 11:34:09.073693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:25:46.970 [2024-12-10 11:34:09.073704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.970 [2024-12-10 11:34:09.073847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.970 [2024-12-10 11:34:09.073866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:46.970 [2024-12-10 11:34:09.073878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:46.970 [2024-12-10 11:34:09.073889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.970 [2024-12-10 11:34:09.074007] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:46.970 [2024-12-10 11:34:09.074032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:46.970 [2024-12-10 11:34:09.074053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:46.970 [2024-12-10 11:34:09.074072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.970 [2024-12-10 11:34:09.074093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:46.970 [2024-12-10 11:34:09.074112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:46.971 [2024-12-10 11:34:09.074124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:46.971 [2024-12-10 11:34:09.074135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:46.971 [2024-12-10 11:34:09.074145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:46.971 [2024-12-10 11:34:09.074155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:46.971 [2024-12-10 11:34:09.074166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md_mirror 00:25:46.971 [2024-12-10 11:34:09.074176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:46.971 [2024-12-10 11:34:09.074185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:46.971 [2024-12-10 11:34:09.074212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:46.971 [2024-12-10 11:34:09.074224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:46.971 [2024-12-10 11:34:09.074236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.971 [2024-12-10 11:34:09.074247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:46.971 [2024-12-10 11:34:09.074257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:46.971 [2024-12-10 11:34:09.074267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.971 [2024-12-10 11:34:09.074277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:46.971 [2024-12-10 11:34:09.074288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:46.971 [2024-12-10 11:34:09.074298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.971 [2024-12-10 11:34:09.074308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:46.971 [2024-12-10 11:34:09.074319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:46.971 [2024-12-10 11:34:09.074328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.971 [2024-12-10 11:34:09.074339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:46.971 [2024-12-10 11:34:09.074348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:46.971 [2024-12-10 11:34:09.074358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.971 [2024-12-10 11:34:09.074368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:46.971 [2024-12-10 11:34:09.074378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:46.971 [2024-12-10 11:34:09.074388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.971 [2024-12-10 11:34:09.074398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:46.971 [2024-12-10 11:34:09.074408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:46.971 [2024-12-10 11:34:09.074418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:46.971 [2024-12-10 11:34:09.074430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:46.971 [2024-12-10 11:34:09.074447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:46.971 [2024-12-10 11:34:09.074466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:46.971 [2024-12-10 11:34:09.074485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:46.971 [2024-12-10 11:34:09.074507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:46.971 [2024-12-10 11:34:09.074525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.971 [2024-12-10 11:34:09.074544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:46.971 [2024-12-10 11:34:09.074557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:46.971 [2024-12-10 11:34:09.074567] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.971 [2024-12-10 11:34:09.074577] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:46.971 [2024-12-10 11:34:09.074588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:46.971 [2024-12-10 11:34:09.074599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:46.971 [2024-12-10 11:34:09.074609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.971 [2024-12-10 11:34:09.074622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:46.971 [2024-12-10 11:34:09.074651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:46.971 [2024-12-10 11:34:09.074662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:46.971 [2024-12-10 11:34:09.074673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:46.971 [2024-12-10 11:34:09.074683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:46.971 [2024-12-10 11:34:09.074694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:46.971 [2024-12-10 11:34:09.074706] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:46.971 [2024-12-10 11:34:09.074725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:46.971 [2024-12-10 11:34:09.074758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:46.971 [2024-12-10 11:34:09.074779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:46.971 [2024-12-10 11:34:09.074799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:46.971 [2024-12-10 11:34:09.074811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:46.971 [2024-12-10 11:34:09.074822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:46.971 [2024-12-10 11:34:09.074833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:46.971 [2024-12-10 11:34:09.074844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:46.971 [2024-12-10 11:34:09.074855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:46.971 [2024-12-10 11:34:09.074866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:46.971 [2024-12-10 11:34:09.074877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:46.971 [2024-12-10 11:34:09.074888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:46.971 [2024-12-10 11:34:09.074899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 
ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:46.971 [2024-12-10 11:34:09.074910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:46.971 [2024-12-10 11:34:09.074922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:46.971 [2024-12-10 11:34:09.074933] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:46.971 [2024-12-10 11:34:09.074945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:46.971 [2024-12-10 11:34:09.074957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:46.971 [2024-12-10 11:34:09.074968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:46.971 [2024-12-10 11:34:09.074979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:46.971 [2024-12-10 11:34:09.074995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:46.971 [2024-12-10 11:34:09.075017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.971 [2024-12-10 11:34:09.075038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:46.971 [2024-12-10 11:34:09.075059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:25:46.971 [2024-12-10 11:34:09.075077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.971 [2024-12-10 11:34:09.108719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.971 [2024-12-10 11:34:09.108787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:46.971 [2024-12-10 11:34:09.108810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.570 ms 00:25:46.971 [2024-12-10 11:34:09.108829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.971 [2024-12-10 11:34:09.108949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.971 [2024-12-10 11:34:09.108966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:46.971 [2024-12-10 11:34:09.108979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:46.971 [2024-12-10 11:34:09.108991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.231 [2024-12-10 11:34:09.172805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.231 [2024-12-10 11:34:09.173013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:47.231 [2024-12-10 11:34:09.173046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.710 ms 00:25:47.231 [2024-12-10 11:34:09.173059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.231 [2024-12-10 11:34:09.173137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.231 [2024-12-10 11:34:09.173153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:47.231 [2024-12-10 11:34:09.173174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.004 ms 00:25:47.231 [2024-12-10 11:34:09.173187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.231 [2024-12-10 11:34:09.173593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.231 [2024-12-10 11:34:09.173613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:47.231 [2024-12-10 11:34:09.173651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:25:47.231 [2024-12-10 11:34:09.173667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.231 [2024-12-10 11:34:09.173828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.231 [2024-12-10 11:34:09.173848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:47.231 [2024-12-10 11:34:09.173866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:25:47.231 [2024-12-10 11:34:09.173878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.231 [2024-12-10 11:34:09.190497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.231 [2024-12-10 11:34:09.190554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:47.231 [2024-12-10 11:34:09.190579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.590 ms 00:25:47.231 [2024-12-10 11:34:09.190591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.231 [2024-12-10 11:34:09.207020] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:47.231 [2024-12-10 11:34:09.207079] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:47.231 [2024-12-10 11:34:09.207100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.207112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:47.232 [2024-12-10 11:34:09.207127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.323 ms 00:25:47.232 [2024-12-10 11:34:09.207138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.232 [2024-12-10 11:34:09.237043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.237123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:47.232 [2024-12-10 11:34:09.237145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.839 ms 00:25:47.232 [2024-12-10 11:34:09.237157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.232 [2024-12-10 11:34:09.254145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.254232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:47.232 [2024-12-10 11:34:09.254254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.877 ms 00:25:47.232 [2024-12-10 11:34:09.254266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.232 [2024-12-10 11:34:09.270578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.270690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:47.232 [2024-12-10 11:34:09.270714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.221 ms 00:25:47.232 [2024-12-10 
11:34:09.270727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.232 [2024-12-10 11:34:09.271660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.271698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:47.232 [2024-12-10 11:34:09.271721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:25:47.232 [2024-12-10 11:34:09.271732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.232 [2024-12-10 11:34:09.347692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.347784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:47.232 [2024-12-10 11:34:09.347823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.929 ms 00:25:47.232 [2024-12-10 11:34:09.347835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.232 [2024-12-10 11:34:09.361129] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:47.232 [2024-12-10 11:34:09.363881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.363925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:47.232 [2024-12-10 11:34:09.363949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.938 ms 00:25:47.232 [2024-12-10 11:34:09.363962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.232 [2024-12-10 11:34:09.364087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.364109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:47.232 [2024-12-10 11:34:09.364128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:47.232 [2024-12-10 11:34:09.364140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.232 [2024-12-10 11:34:09.364236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.364257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:47.232 [2024-12-10 11:34:09.364269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:47.232 [2024-12-10 11:34:09.364281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.232 [2024-12-10 11:34:09.364313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.364328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:47.232 [2024-12-10 11:34:09.364340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:47.232 [2024-12-10 11:34:09.364351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.232 [2024-12-10 11:34:09.364398] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:47.232 [2024-12-10 11:34:09.364415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.364426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:47.232 [2024-12-10 11:34:09.364438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:47.232 [2024-12-10 11:34:09.364449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.232 [2024-12-10 11:34:09.396328] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.396383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:47.232 [2024-12-10 11:34:09.396409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.852 ms 00:25:47.232 [2024-12-10 11:34:09.396422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.232 [2024-12-10 11:34:09.396512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.232 [2024-12-10 11:34:09.396531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:47.232 [2024-12-10 11:34:09.396545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:47.232 [2024-12-10 11:34:09.396556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.491 [2024-12-10 11:34:09.397757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 353.426 ms, result 0 00:25:48.869  [2024-12-10T11:34:11.972Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-10T11:34:12.909Z] Copying: 51/1024 [MB] (25 MBps) [2024-12-10T11:34:13.846Z] Copying: 76/1024 [MB] (25 MBps) [2024-12-10T11:34:14.784Z] Copying: 102/1024 [MB] (25 MBps) [2024-12-10T11:34:15.719Z] Copying: 128/1024 [MB] (25 MBps) [2024-12-10T11:34:16.654Z] Copying: 152/1024 [MB] (24 MBps) [2024-12-10T11:34:18.031Z] Copying: 178/1024 [MB] (26 MBps) [2024-12-10T11:34:18.966Z] Copying: 204/1024 [MB] (26 MBps) [2024-12-10T11:34:19.903Z] Copying: 230/1024 [MB] (25 MBps) [2024-12-10T11:34:20.839Z] Copying: 256/1024 [MB] (26 MBps) [2024-12-10T11:34:21.775Z] Copying: 282/1024 [MB] (25 MBps) [2024-12-10T11:34:22.712Z] Copying: 307/1024 [MB] (25 MBps) [2024-12-10T11:34:23.649Z] Copying: 334/1024 [MB] (26 MBps) [2024-12-10T11:34:24.635Z] Copying: 360/1024 [MB] (26 MBps) [2024-12-10T11:34:26.019Z] Copying: 386/1024 [MB] (25 MBps) [2024-12-10T11:34:26.957Z] Copying: 411/1024 [MB] (25 MBps) [2024-12-10T11:34:27.893Z] Copying: 436/1024 [MB] (25 MBps) [2024-12-10T11:34:28.828Z] Copying: 461/1024 [MB] (25 MBps) [2024-12-10T11:34:29.763Z] Copying: 487/1024 [MB] (25 MBps) [2024-12-10T11:34:30.697Z] Copying: 512/1024 [MB] (25 MBps) [2024-12-10T11:34:31.634Z] Copying: 536/1024 [MB] (24 MBps) [2024-12-10T11:34:32.632Z] Copying: 560/1024 [MB] (23 MBps) [2024-12-10T11:34:34.009Z] Copying: 584/1024 [MB] (24 MBps) [2024-12-10T11:34:34.946Z] Copying: 609/1024 [MB] (25 MBps) [2024-12-10T11:34:35.882Z] Copying: 634/1024 [MB] (24 MBps) [2024-12-10T11:34:36.818Z] Copying: 661/1024 [MB] (26 MBps) [2024-12-10T11:34:37.753Z] Copying: 685/1024 [MB] (24 MBps) [2024-12-10T11:34:38.688Z] Copying: 711/1024 [MB] (26 MBps) [2024-12-10T11:34:39.624Z] Copying: 738/1024 [MB] (26 MBps) [2024-12-10T11:34:40.997Z] Copying: 765/1024 [MB] (26 MBps) [2024-12-10T11:34:41.935Z] Copying: 791/1024 [MB] (26 MBps) [2024-12-10T11:34:42.868Z] Copying: 818/1024 [MB] (26 MBps) [2024-12-10T11:34:43.802Z] Copying: 843/1024 [MB] (25 MBps) [2024-12-10T11:34:44.736Z] Copying: 869/1024 [MB] (26 MBps) [2024-12-10T11:34:45.670Z] Copying: 893/1024 [MB] (23 MBps) [2024-12-10T11:34:47.044Z] Copying: 916/1024 [MB] (22 MBps) [2024-12-10T11:34:47.978Z] Copying: 940/1024 [MB] (24 MBps) [2024-12-10T11:34:48.913Z] Copying: 963/1024 [MB] (23 MBps) [2024-12-10T11:34:49.849Z] Copying: 987/1024 [MB] (23 MBps) [2024-12-10T11:34:50.416Z] Copying: 1011/1024 [MB] (24 MBps) [2024-12-10T11:34:51.351Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-10 11:34:50.982760] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.184 [2024-12-10 11:34:50.983168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:29.184 [2024-12-10 11:34:50.983332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:29.184 [2024-12-10 11:34:50.983471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.184 [2024-12-10 11:34:50.983576] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:29.184 [2024-12-10 11:34:50.988897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.184 [2024-12-10 11:34:50.989090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:29.184 [2024-12-10 11:34:50.989224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.120 ms 00:26:29.184 [2024-12-10 11:34:50.989466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.184 [2024-12-10 11:34:50.989775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.184 [2024-12-10 11:34:50.989952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:29.184 [2024-12-10 11:34:50.990096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:26:29.184 [2024-12-10 11:34:50.990212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.184 [2024-12-10 11:34:50.993881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.184 [2024-12-10 11:34:50.994045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:29.184 [2024-12-10 11:34:50.994501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.629 ms 00:26:29.184 [2024-12-10 11:34:50.994664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.184 [2024-12-10 11:34:51.001860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.184 [2024-12-10 11:34:51.002033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:29.184 [2024-12-10 11:34:51.002157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.071 ms 00:26:29.184 [2024-12-10 11:34:51.002206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.184 [2024-12-10 11:34:51.036169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.184 [2024-12-10 11:34:51.036416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:29.184 [2024-12-10 11:34:51.036573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.733 ms 00:26:29.184 [2024-12-10 11:34:51.036715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.184 [2024-12-10 11:34:51.055601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.184 [2024-12-10 11:34:51.055833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:29.184 [2024-12-10 11:34:51.056641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.773 ms 00:26:29.184 [2024-12-10 11:34:51.056891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.184 [2024-12-10 11:34:51.057136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.184 [2024-12-10 11:34:51.057207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:29.184 [2024-12-10 11:34:51.057330] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:26:29.184 [2024-12-10 11:34:51.057385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.184 [2024-12-10 11:34:51.087269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.184 [2024-12-10 11:34:51.087469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:29.184 [2024-12-10 11:34:51.087592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.815 ms 00:26:29.184 [2024-12-10 11:34:51.087665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.184 [2024-12-10 11:34:51.115282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.184 [2024-12-10 11:34:51.115482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:29.184 [2024-12-10 11:34:51.115523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.479 ms 00:26:29.184 [2024-12-10 11:34:51.115535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.184 [2024-12-10 11:34:51.142715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.184 [2024-12-10 11:34:51.142914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:29.185 [2024-12-10 11:34:51.142940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.136 ms 00:26:29.185 [2024-12-10 11:34:51.142952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.185 [2024-12-10 11:34:51.169574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.185 [2024-12-10 11:34:51.169611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:29.185 [2024-12-10 11:34:51.169640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.532 ms 00:26:29.185 [2024-12-10 11:34:51.169672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.185 [2024-12-10 11:34:51.169726] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:29.185 [2024-12-10 11:34:51.169756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 
11:34:51.169865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.169980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:26:29.185 [2024-12-10 11:34:51.170176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:29.185 [2024-12-10 11:34:51.170632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:29.186 [2024-12-10 11:34:51.170908] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:29.186 [2024-12-10 11:34:51.170919] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e6d53703-a958-44d8-b655-6b7e2fd3fe66 00:26:29.186 [2024-12-10 11:34:51.170930] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:29.186 [2024-12-10 11:34:51.170940] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:29.186 [2024-12-10 11:34:51.170950] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:29.186 [2024-12-10 11:34:51.170961] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:29.186 [2024-12-10 11:34:51.170983] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:29.186 [2024-12-10 11:34:51.170994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:29.186 [2024-12-10 11:34:51.171003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:29.186 [2024-12-10 11:34:51.171013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:29.186 [2024-12-10 11:34:51.171022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:29.186 [2024-12-10 11:34:51.171033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:29.186 [2024-12-10 11:34:51.171043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:26:29.186 [2024-12-10 11:34:51.171054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms
00:26:29.186 [2024-12-10 11:34:51.171070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.186 [2024-12-10 11:34:51.187071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:29.186 [2024-12-10 11:34:51.187106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:26:29.186 [2024-12-10 11:34:51.187136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.959 ms
00:26:29.186 [2024-12-10 11:34:51.187146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.186 [2024-12-10 11:34:51.187554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:29.186 [2024-12-10 11:34:51.187568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:26:29.186 [2024-12-10 11:34:51.187586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms
00:26:29.186 [2024-12-10 11:34:51.187596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.186 [2024-12-10 11:34:51.226788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:29.186 [2024-12-10 11:34:51.226999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:26:29.186 [2024-12-10 11:34:51.227025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:29.186 [2024-12-10 11:34:51.227036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.186 [2024-12-10 11:34:51.227095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:29.186 [2024-12-10 11:34:51.227111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:26:29.186 [2024-12-10 11:34:51.227129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:29.186 [2024-12-10 11:34:51.227140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.186 [2024-12-10 11:34:51.227254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:29.186 [2024-12-10 11:34:51.227273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:26:29.186 [2024-12-10 11:34:51.227286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:29.186 [2024-12-10 11:34:51.227296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.186 [2024-12-10 11:34:51.227317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:29.186 [2024-12-10 11:34:51.227330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:26:29.186 [2024-12-10 11:34:51.227342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:29.186 [2024-12-10 11:34:51.227359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.186 [2024-12-10 11:34:51.329468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:29.186 [2024-12-10 11:34:51.329571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:26:29.186 [2024-12-10 11:34:51.329589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:29.186 [2024-12-10 11:34:51.329599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.445 [2024-12-10 11:34:51.421909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:29.445 [2024-12-10 11:34:51.421994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:26:29.445 [2024-12-10 11:34:51.422030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:29.445 [2024-12-10 11:34:51.422042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.445 [2024-12-10 11:34:51.422154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:29.445 [2024-12-10 11:34:51.422172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:26:29.445 [2024-12-10 11:34:51.422185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:29.445 [2024-12-10 11:34:51.422196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.445 [2024-12-10 11:34:51.422242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:29.445 [2024-12-10 11:34:51.422257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:26:29.445 [2024-12-10 11:34:51.422268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:29.445 [2024-12-10 11:34:51.422279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.445 [2024-12-10 11:34:51.422436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:29.445 [2024-12-10 11:34:51.422456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:26:29.445 [2024-12-10 11:34:51.422468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:29.445 [2024-12-10 11:34:51.422479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.445 [2024-12-10 11:34:51.422528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:29.445 [2024-12-10 11:34:51.422546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:26:29.445 [2024-12-10 11:34:51.422558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:29.445 [2024-12-10 11:34:51.422582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.445 [2024-12-10 11:34:51.422630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:29.445 [2024-12-10 11:34:51.422646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:26:29.445 [2024-12-10 11:34:51.422701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:29.445 [2024-12-10 11:34:51.422716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.445 [2024-12-10 11:34:51.422768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:29.445 [2024-12-10 11:34:51.422784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:29.445 [2024-12-10 11:34:51.422797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:29.445 [2024-12-10 11:34:51.422814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.445 [2024-12-10 11:34:51.422958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 440.217 ms, result 0
00:26:30.383 
00:26:30.383 
00:26:30.383 11:34:52 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:26:32.916 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:26:32.916 11:34:54 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:26:32.916 [2024-12-10 11:34:54.750720] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... [2024-12-10 11:34:54.750882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80332 ]
00:26:32.916 [2024-12-10 11:34:54.930501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:32.916 [2024-12-10 11:34:55.071777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:33.481 [2024-12-10 11:34:55.420111] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:33.481 [2024-12-10 11:34:55.420189] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:33.481 [2024-12-10 11:34:55.586479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.481 [2024-12-10 11:34:55.586556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:26:33.481 [2024-12-10 11:34:55.586587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:33.481 [2024-12-10 11:34:55.586600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.481 [2024-12-10 11:34:55.586692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.481 [2024-12-10 11:34:55.586724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:33.481 [2024-12-10 11:34:55.586738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:26:33.481 [2024-12-10 11:34:55.586748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.482 [2024-12-10 11:34:55.586786] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:26:33.482 [2024-12-10 11:34:55.587801] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:26:33.482 [2024-12-10 11:34:55.587847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.482 [2024-12-10 11:34:55.587862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:26:33.482 [2024-12-10 11:34:55.587875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms
00:26:33.482 [2024-12-10 11:34:55.587885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.482 [2024-12-10 11:34:55.589119] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:26:33.482 [2024-12-10 11:34:55.606485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.482 [2024-12-10 11:34:55.606670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:26:33.482 [2024-12-10 11:34:55.606700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.368 ms
00:26:33.482 [2024-12-10 11:34:55.606713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.482 [2024-12-10 11:34:55.606808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.482 [2024-12-10 11:34:55.606829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:26:33.482 [2024-12-10 11:34:55.606842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:26:33.482 [2024-12-10 11:34:55.606853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.482 [2024-12-10 11:34:55.611171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.482 [2024-12-10 11:34:55.611219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:26:33.482 [2024-12-10 11:34:55.611236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms
00:26:33.482 [2024-12-10 11:34:55.611254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.482 [2024-12-10 11:34:55.611348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.482 [2024-12-10 11:34:55.611366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:26:33.482 [2024-12-10 11:34:55.611379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms
00:26:33.482 [2024-12-10 11:34:55.611389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.482 [2024-12-10 11:34:55.611447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.482 [2024-12-10 11:34:55.611463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:26:33.482 [2024-12-10 11:34:55.611476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:26:33.482 [2024-12-10 11:34:55.611486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.482 [2024-12-10 11:34:55.611524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:26:33.482 [2024-12-10 11:34:55.615789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.482 [2024-12-10 11:34:55.615828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:26:33.482 [2024-12-10 11:34:55.615849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.274 ms
00:26:33.482 [2024-12-10 11:34:55.615860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.482 [2024-12-10 11:34:55.615902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.482 [2024-12-10 11:34:55.615916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:26:33.482 [2024-12-10 11:34:55.615928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:26:33.482 [2024-12-10 11:34:55.615939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.482 [2024-12-10 11:34:55.615999] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:26:33.482 [2024-12-10 11:34:55.616032] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:26:33.482 [2024-12-10 11:34:55.616075] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:26:33.482 [2024-12-10 11:34:55.616099] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:26:33.482 [2024-12-10 11:34:55.616210] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:26:33.482 [2024-12-10 11:34:55.616225] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:26:33.482 [2024-12-10 11:34:55.616240] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:33.482 [2024-12-10 11:34:55.616267] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:33.482 [2024-12-10 11:34:55.616282] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:33.482 [2024-12-10 11:34:55.616294] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:33.482 [2024-12-10 11:34:55.616305] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:33.482 [2024-12-10 11:34:55.616329] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:33.482 [2024-12-10 11:34:55.616339] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:33.482 [2024-12-10 11:34:55.616351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.482 [2024-12-10 11:34:55.616362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:33.482 [2024-12-10 11:34:55.616373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:26:33.482 [2024-12-10 11:34:55.616384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.482 [2024-12-10 11:34:55.616484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.482 [2024-12-10 11:34:55.616499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:33.482 [2024-12-10 11:34:55.616510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:33.482 [2024-12-10 11:34:55.616520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.482 [2024-12-10 11:34:55.616658] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:33.482 [2024-12-10 11:34:55.616677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:33.482 [2024-12-10 11:34:55.616689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:33.482 [2024-12-10 11:34:55.616700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.482 [2024-12-10 11:34:55.616711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:33.482 [2024-12-10 11:34:55.616721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:33.482 [2024-12-10 11:34:55.616731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:33.482 [2024-12-10 11:34:55.616743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:33.482 [2024-12-10 11:34:55.616754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:33.482 [2024-12-10 11:34:55.616764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:33.482 [2024-12-10 11:34:55.616774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:33.482 [2024-12-10 11:34:55.616784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:33.482 [2024-12-10 11:34:55.616794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:33.482 [2024-12-10 11:34:55.616817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:33.482 [2024-12-10 11:34:55.616828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:33.482 [2024-12-10 11:34:55.616839] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.482 [2024-12-10 11:34:55.616849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:33.482 [2024-12-10 11:34:55.616859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:33.482 [2024-12-10 11:34:55.616868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.482 [2024-12-10 11:34:55.616879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:33.482 [2024-12-10 11:34:55.616889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:33.482 [2024-12-10 11:34:55.616898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.482 [2024-12-10 11:34:55.616908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:33.482 [2024-12-10 11:34:55.616918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:33.482 [2024-12-10 11:34:55.616928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.482 [2024-12-10 11:34:55.616937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:33.482 [2024-12-10 11:34:55.616947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:33.482 [2024-12-10 11:34:55.616957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.482 [2024-12-10 11:34:55.616967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:33.482 [2024-12-10 11:34:55.616977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:33.482 [2024-12-10 11:34:55.616986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.482 [2024-12-10 11:34:55.616997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:33.482 [2024-12-10 11:34:55.617007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:33.482 [2024-12-10 11:34:55.617016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:33.482 [2024-12-10 11:34:55.617026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:33.482 [2024-12-10 11:34:55.617036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:33.482 [2024-12-10 11:34:55.617046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:33.482 [2024-12-10 11:34:55.617056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:33.482 [2024-12-10 11:34:55.617066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:33.482 [2024-12-10 11:34:55.617076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.482 [2024-12-10 11:34:55.617085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:33.482 [2024-12-10 11:34:55.617095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:33.482 [2024-12-10 11:34:55.617107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.482 [2024-12-10 11:34:55.617117] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:33.482 [2024-12-10 11:34:55.617129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:33.482 [2024-12-10 11:34:55.617139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:33.482 [2024-12-10 11:34:55.617150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.482 
[2024-12-10 11:34:55.617161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:33.482 [2024-12-10 11:34:55.617172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:33.483 [2024-12-10 11:34:55.617181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:33.483 [2024-12-10 11:34:55.617192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:33.483 [2024-12-10 11:34:55.617201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:33.483 [2024-12-10 11:34:55.617211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:33.483 [2024-12-10 11:34:55.617223] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:33.483 [2024-12-10 11:34:55.617237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:33.483 [2024-12-10 11:34:55.617254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:33.483 [2024-12-10 11:34:55.617265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:33.483 [2024-12-10 11:34:55.617276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:33.483 [2024-12-10 11:34:55.617286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:33.483 [2024-12-10 11:34:55.617297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:33.483 [2024-12-10 11:34:55.617308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:33.483 [2024-12-10 11:34:55.617318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:33.483 [2024-12-10 11:34:55.617329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:33.483 [2024-12-10 11:34:55.617340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:33.483 [2024-12-10 11:34:55.617352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:33.483 [2024-12-10 11:34:55.617363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:33.483 [2024-12-10 11:34:55.617373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:33.483 [2024-12-10 11:34:55.617384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:33.483 [2024-12-10 11:34:55.617396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:33.483 [2024-12-10 11:34:55.617410] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB 
metadata layout - base dev: 00:26:33.483 [2024-12-10 11:34:55.617422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:33.483 [2024-12-10 11:34:55.617435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:33.483 [2024-12-10 11:34:55.617446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:33.483 [2024-12-10 11:34:55.617458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:33.483 [2024-12-10 11:34:55.617469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:33.483 [2024-12-10 11:34:55.617481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.483 [2024-12-10 11:34:55.617492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:33.483 [2024-12-10 11:34:55.617503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:26:33.483 [2024-12-10 11:34:55.617514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.742 [2024-12-10 11:34:55.659086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.742 [2024-12-10 11:34:55.659347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:33.742 [2024-12-10 11:34:55.659534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.484 ms 00:26:33.742 [2024-12-10 11:34:55.659767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.742 [2024-12-10 11:34:55.660090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.742 [2024-12-10 11:34:55.660270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:33.742 [2024-12-10 11:34:55.660442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:26:33.742 [2024-12-10 11:34:55.660610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.742 [2024-12-10 11:34:55.716699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.742 [2024-12-10 11:34:55.716914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:33.742 [2024-12-10 11:34:55.717025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.765 ms 00:26:33.742 [2024-12-10 11:34:55.717081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.742 [2024-12-10 11:34:55.717281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.742 [2024-12-10 11:34:55.717338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:33.742 [2024-12-10 11:34:55.717593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:33.742 [2024-12-10 11:34:55.717681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.742 [2024-12-10 11:34:55.718253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.742 [2024-12-10 11:34:55.718389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:33.742 [2024-12-10 11:34:55.718498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:26:33.742 [2024-12-10 11:34:55.718725] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.742 [2024-12-10 11:34:55.718937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.742 [2024-12-10 11:34:55.719013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:33.742 [2024-12-10 11:34:55.719154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:26:33.742 [2024-12-10 11:34:55.719246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.742 [2024-12-10 11:34:55.735991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.742 [2024-12-10 11:34:55.736163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:33.742 [2024-12-10 11:34:55.736292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.680 ms 00:26:33.742 [2024-12-10 11:34:55.736408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.742 [2024-12-10 11:34:55.753562] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:33.742 [2024-12-10 11:34:55.753819] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:33.742 [2024-12-10 11:34:55.754000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.742 [2024-12-10 11:34:55.754113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:33.742 [2024-12-10 11:34:55.754161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.421 ms 00:26:33.742 [2024-12-10 11:34:55.754432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.742 [2024-12-10 11:34:55.791263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.742 [2024-12-10 11:34:55.791502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:33.742 [2024-12-10 11:34:55.791660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.677 ms 00:26:33.742 [2024-12-10 11:34:55.791834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.742 [2024-12-10 11:34:55.813207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.742 [2024-12-10 11:34:55.813466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:33.742 [2024-12-10 11:34:55.813504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.187 ms 00:26:33.742 [2024-12-10 11:34:55.813525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.742 [2024-12-10 11:34:55.834514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.742 [2024-12-10 11:34:55.834570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:33.742 [2024-12-10 11:34:55.834624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.925 ms 00:26:33.742 [2024-12-10 11:34:55.834642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.742 [2024-12-10 11:34:55.835830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.742 [2024-12-10 11:34:55.836100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:33.743 [2024-12-10 11:34:55.836147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:26:33.743 [2024-12-10 11:34:55.836169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:34.001 [2024-12-10 11:34:55.909065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.001 [2024-12-10 11:34:55.909132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:34.001 [2024-12-10 11:34:55.909157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.846 ms 00:26:34.001 [2024-12-10 11:34:55.909167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.001 [2024-12-10 11:34:55.920776] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:34.001 [2024-12-10 11:34:55.923025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.001 [2024-12-10 11:34:55.923055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:34.001 [2024-12-10 11:34:55.923086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.778 ms 00:26:34.001 [2024-12-10 11:34:55.923096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.001 [2024-12-10 11:34:55.923198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.001 [2024-12-10 11:34:55.923215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:34.001 [2024-12-10 11:34:55.923229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:34.001 [2024-12-10 11:34:55.923239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.001 [2024-12-10 11:34:55.923340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.002 [2024-12-10 11:34:55.923357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:34.002 [2024-12-10 11:34:55.923367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:34.002 [2024-12-10 11:34:55.923376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.002 [2024-12-10 11:34:55.923403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.002 [2024-12-10 11:34:55.923415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:34.002 [2024-12-10 11:34:55.923425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:34.002 [2024-12-10 11:34:55.923433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.002 [2024-12-10 11:34:55.923468] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:34.002 [2024-12-10 11:34:55.923482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.002 [2024-12-10 11:34:55.923491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:34.002 [2024-12-10 11:34:55.923500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:34.002 [2024-12-10 11:34:55.923509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.002 [2024-12-10 11:34:55.952544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.002 [2024-12-10 11:34:55.952584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:34.002 [2024-12-10 11:34:55.952622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.010 ms 00:26:34.002 [2024-12-10 11:34:55.952632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.002 [2024-12-10 11:34:55.952749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.002 
[2024-12-10 11:34:55.952785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:34.002 [2024-12-10 11:34:55.952797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:34.002 [2024-12-10 11:34:55.952807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.002 [2024-12-10 11:34:55.954102] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 367.003 ms, result 0 00:26:34.938 [2024-12-10T11:34:58.040Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-10T11:34:58.976Z] Copying: 46/1024 [MB] (23 MBps) [2024-12-10T11:35:00.352Z] Copying: 70/1024 [MB] (23 MBps) [2024-12-10T11:35:01.288Z] Copying: 93/1024 [MB] (23 MBps) [2024-12-10T11:35:02.252Z] Copying: 118/1024 [MB] (24 MBps) [2024-12-10T11:35:03.188Z] Copying: 141/1024 [MB] (23 MBps) [2024-12-10T11:35:04.158Z] Copying: 165/1024 [MB] (23 MBps) [2024-12-10T11:35:05.098Z] Copying: 190/1024 [MB] (24 MBps) [2024-12-10T11:35:06.033Z] Copying: 214/1024 [MB] (24 MBps) [2024-12-10T11:35:06.968Z] Copying: 238/1024 [MB] (24 MBps) [2024-12-10T11:35:08.344Z] Copying: 262/1024 [MB] (24 MBps) [2024-12-10T11:35:09.279Z] Copying: 287/1024 [MB] (24 MBps) [2024-12-10T11:35:10.215Z] Copying: 311/1024 [MB] (24 MBps) [2024-12-10T11:35:11.149Z] Copying: 335/1024 [MB] (24 MBps) [2024-12-10T11:35:12.084Z] Copying: 359/1024 [MB] (23 MBps) [2024-12-10T11:35:13.018Z] Copying: 386/1024 [MB] (27 MBps) [2024-12-10T11:35:14.392Z] Copying: 412/1024 [MB] (25 MBps) [2024-12-10T11:35:14.986Z] Copying: 437/1024 [MB] (25 MBps) [2024-12-10T11:35:16.361Z] Copying: 464/1024 [MB] (26 MBps) [2024-12-10T11:35:17.295Z] Copying: 491/1024 [MB] (26 MBps) [2024-12-10T11:35:18.230Z] Copying: 517/1024 [MB] (26 MBps) [2024-12-10T11:35:19.164Z] Copying: 543/1024 [MB] (26 MBps) [2024-12-10T11:35:20.100Z] Copying: 570/1024 [MB] (26 MBps) [2024-12-10T11:35:21.035Z] Copying: 595/1024 [MB] (25 MBps) [2024-12-10T11:35:21.972Z] Copying: 622/1024 [MB] (26 MBps) [2024-12-10T11:35:23.349Z] Copying: 648/1024 [MB] (25 MBps) [2024-12-10T11:35:24.286Z] Copying: 674/1024 [MB] (25 MBps) [2024-12-10T11:35:25.226Z] Copying: 701/1024 [MB] (27 MBps) [2024-12-10T11:35:26.159Z] Copying: 726/1024 [MB] (25 MBps) [2024-12-10T11:35:27.094Z] Copying: 752/1024 [MB] (25 MBps) [2024-12-10T11:35:28.029Z] Copying: 779/1024 [MB] (26 MBps) [2024-12-10T11:35:29.405Z] Copying: 804/1024 [MB] (25 MBps) [2024-12-10T11:35:29.971Z] Copying: 829/1024 [MB] (25 MBps) [2024-12-10T11:35:31.348Z] Copying: 855/1024 [MB] (25 MBps) [2024-12-10T11:35:32.285Z] Copying: 879/1024 [MB] (23 MBps) [2024-12-10T11:35:33.250Z] Copying: 903/1024 [MB] (23 MBps) [2024-12-10T11:35:34.189Z] Copying: 926/1024 [MB] (23 MBps) [2024-12-10T11:35:35.124Z] Copying: 951/1024 [MB] (24 MBps) [2024-12-10T11:35:36.059Z] Copying: 975/1024 [MB] (24 MBps) [2024-12-10T11:35:37.003Z] Copying: 999/1024 [MB] (24 MBps) [2024-12-10T11:35:38.381Z] Copying: 1023/1024 [MB] (23 MBps) [2024-12-10T11:35:38.381Z] Copying: 1048492/1048576 [kB] (904 kBps) [2024-12-10T11:35:38.381Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-10 11:35:38.073180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.214 [2024-12-10 11:35:38.073258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:16.214 [2024-12-10 11:35:38.073302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:16.214 [2024-12-10 11:35:38.073312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0]
status: 0 00:27:16.214 [2024-12-10 11:35:38.074771] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:16.214 [2024-12-10 11:35:38.079905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.214 [2024-12-10 11:35:38.079947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:16.214 [2024-12-10 11:35:38.079966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.935 ms 00:27:16.214 [2024-12-10 11:35:38.080001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.214 [2024-12-10 11:35:38.092158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.214 [2024-12-10 11:35:38.092216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:16.214 [2024-12-10 11:35:38.092249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.600 ms 00:27:16.214 [2024-12-10 11:35:38.092269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.214 [2024-12-10 11:35:38.112925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.214 [2024-12-10 11:35:38.113123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:16.214 [2024-12-10 11:35:38.113150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.636 ms 00:27:16.214 [2024-12-10 11:35:38.113161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.214 [2024-12-10 11:35:38.119053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.214 [2024-12-10 11:35:38.119082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:16.214 [2024-12-10 11:35:38.119111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.849 ms 00:27:16.214 [2024-12-10 11:35:38.119127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.214 [2024-12-10 11:35:38.147546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.214 [2024-12-10 11:35:38.147592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:16.214 [2024-12-10 11:35:38.147626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.369 ms 00:27:16.214 [2024-12-10 11:35:38.147637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.214 [2024-12-10 11:35:38.165376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.214 [2024-12-10 11:35:38.165417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:16.214 [2024-12-10 11:35:38.165449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.673 ms 00:27:16.214 [2024-12-10 11:35:38.165459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.214 [2024-12-10 11:35:38.275933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.214 [2024-12-10 11:35:38.276007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:16.214 [2024-12-10 11:35:38.276027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.430 ms 00:27:16.214 [2024-12-10 11:35:38.276038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.214 [2024-12-10 11:35:38.303978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.214 [2024-12-10 11:35:38.304209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 
00:27:16.214 [2024-12-10 11:35:38.304237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.917 ms 00:27:16.214 [2024-12-10 11:35:38.304249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.214 [2024-12-10 11:35:38.336757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.214 [2024-12-10 11:35:38.336939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:16.214 [2024-12-10 11:35:38.336968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.463 ms 00:27:16.214 [2024-12-10 11:35:38.336982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.214 [2024-12-10 11:35:38.367456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.214 [2024-12-10 11:35:38.367494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:16.214 [2024-12-10 11:35:38.367526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.412 ms 00:27:16.214 [2024-12-10 11:35:38.367536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.474 [2024-12-10 11:35:38.397095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.474 [2024-12-10 11:35:38.397148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:16.474 [2024-12-10 11:35:38.397180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.474 ms 00:27:16.474 [2024-12-10 11:35:38.397190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.474 [2024-12-10 11:35:38.397230] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:16.474 [2024-12-10 11:35:38.397252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115456 / 261120 wr_cnt: 1 state: open 00:27:16.474 [2024-12-10 11:35:38.397265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:16.474 [2024-12-10 11:35:38.397276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:16.474 [2024-12-10 11:35:38.397286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:16.474 [2024-12-10 11:35:38.397296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:16.474 [2024-12-10 11:35:38.397306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:16.474 [2024-12-10 11:35:38.397316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:16.474 [2024-12-10 11:35:38.397326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:16.474 [2024-12-10 11:35:38.397336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:16.474 [2024-12-10 11:35:38.397346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:16.474 [2024-12-10 11:35:38.397356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:16.474 [2024-12-10 11:35:38.397366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:16.474 [2024-12-10 11:35:38.397377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:27:16.474 [2024-12-10 11:35:38.397387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:16.474 [2024-12-10 11:35:38.397397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.397989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398289] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:16.475 [2024-12-10 11:35:38.398446] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:16.475 [2024-12-10 11:35:38.398457] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e6d53703-a958-44d8-b655-6b7e2fd3fe66 00:27:16.475 [2024-12-10 11:35:38.398470] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115456 00:27:16.475 [2024-12-10 11:35:38.398481] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116416 00:27:16.475 [2024-12-10 11:35:38.398491] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115456 00:27:16.476 [2024-12-10 11:35:38.398503] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:27:16.476 [2024-12-10 11:35:38.398532] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:16.476 [2024-12-10 11:35:38.398544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:16.476 [2024-12-10 11:35:38.398554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:16.476 [2024-12-10 11:35:38.398564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:16.476 [2024-12-10 11:35:38.398589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:16.476 [2024-12-10 11:35:38.398599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.476 [2024-12-10 11:35:38.398610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:16.476 [2024-12-10 11:35:38.398620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:27:16.476 [2024-12-10 11:35:38.398630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.413793] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.476 [2024-12-10 11:35:38.413831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:16.476 [2024-12-10 11:35:38.413870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.109 ms 00:27:16.476 [2024-12-10 11:35:38.413880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.414314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.476 [2024-12-10 11:35:38.414341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:16.476 [2024-12-10 11:35:38.414355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:27:16.476 [2024-12-10 11:35:38.414365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.453292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.476 [2024-12-10 11:35:38.453343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:16.476 [2024-12-10 11:35:38.453376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.476 [2024-12-10 11:35:38.453387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.453454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.476 [2024-12-10 11:35:38.453469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:16.476 [2024-12-10 11:35:38.453480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.476 [2024-12-10 11:35:38.453491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.453611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.476 [2024-12-10 11:35:38.453635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:16.476 [2024-12-10 11:35:38.453646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.476 [2024-12-10 11:35:38.453670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.453726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.476 [2024-12-10 11:35:38.453743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:16.476 [2024-12-10 11:35:38.453753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.476 [2024-12-10 11:35:38.453763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.545179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.476 [2024-12-10 11:35:38.545244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:16.476 [2024-12-10 11:35:38.545276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.476 [2024-12-10 11:35:38.545286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.618268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.476 [2024-12-10 11:35:38.618547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:16.476 [2024-12-10 11:35:38.618589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.476 [2024-12-10 11:35:38.618602] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.618749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.476 [2024-12-10 11:35:38.618768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:16.476 [2024-12-10 11:35:38.618780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.476 [2024-12-10 11:35:38.618796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.618837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.476 [2024-12-10 11:35:38.618851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:16.476 [2024-12-10 11:35:38.618861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.476 [2024-12-10 11:35:38.618885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.619025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.476 [2024-12-10 11:35:38.619043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:16.476 [2024-12-10 11:35:38.619069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.476 [2024-12-10 11:35:38.619084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.619146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.476 [2024-12-10 11:35:38.619163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:16.476 [2024-12-10 11:35:38.619174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.476 [2024-12-10 11:35:38.619184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.619256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.476 [2024-12-10 11:35:38.619269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:16.476 [2024-12-10 11:35:38.619280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.476 [2024-12-10 11:35:38.619290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.619345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.476 [2024-12-10 11:35:38.619377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:16.476 [2024-12-10 11:35:38.619388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.476 [2024-12-10 11:35:38.619398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.476 [2024-12-10 11:35:38.619531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.308 ms, result 0 00:27:17.853 00:27:17.853 00:27:17.853 11:35:40 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:18.112 [2024-12-10 11:35:40.113884] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
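
The read-back that starts here (restore.sh@80) mirrors the write in restore.sh@79: --ib and --skip on the read correspond to --ob and --seek on the write, following dd's input/output naming, and both use the same 131072-block offset. At the FTL's 4 KiB block size, --count=262144 works out to the 1024 MiB the progress counters walk through, and the WAF in the shutdown dump above is simply total writes over user writes. Both figures re-derived, as a sketch:

    # WAF from the shutdown statistics: total writes / user writes.
    awk 'BEGIN { printf "WAF = %.4f\n", 116416 / 115456 }'   # -> WAF = 1.0083
    # Read-back size: 262144 blocks x 4096 B per FTL block.
    echo $(( 262144 * 4096 / 1024 / 1024 )) MiB              # -> 1024 MiB
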
00:27:18.112 [2024-12-10 11:35:40.114067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80774 ] 00:27:18.371 [2024-12-10 11:35:40.294108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.371 [2024-12-10 11:35:40.390782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.629 [2024-12-10 11:35:40.674330] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:18.629 [2024-12-10 11:35:40.674431] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:18.890 [2024-12-10 11:35:40.834110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.890 [2024-12-10 11:35:40.834381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:18.890 [2024-12-10 11:35:40.834413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:18.890 [2024-12-10 11:35:40.834426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.890 [2024-12-10 11:35:40.834504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.890 [2024-12-10 11:35:40.834527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:18.890 [2024-12-10 11:35:40.834540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:27:18.890 [2024-12-10 11:35:40.834550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.890 [2024-12-10 11:35:40.834582] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:18.890 [2024-12-10 11:35:40.835525] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:18.890 [2024-12-10 11:35:40.835555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.890 [2024-12-10 11:35:40.835568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:18.890 [2024-12-10 11:35:40.835579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:27:18.890 [2024-12-10 11:35:40.835589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.890 [2024-12-10 11:35:40.836854] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:18.890 [2024-12-10 11:35:40.853364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.890 [2024-12-10 11:35:40.853534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:18.890 [2024-12-10 11:35:40.853562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.511 ms 00:27:18.890 [2024-12-10 11:35:40.853577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.890 [2024-12-10 11:35:40.853682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.890 [2024-12-10 11:35:40.853703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:18.890 [2024-12-10 11:35:40.853716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:18.890 [2024-12-10 11:35:40.853727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.890 [2024-12-10 11:35:40.858265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:18.890 [2024-12-10 11:35:40.858308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:18.890 [2024-12-10 11:35:40.858340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.440 ms 00:27:18.890 [2024-12-10 11:35:40.858358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.890 [2024-12-10 11:35:40.858451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.890 [2024-12-10 11:35:40.858469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:18.890 [2024-12-10 11:35:40.858481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:18.890 [2024-12-10 11:35:40.858491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.890 [2024-12-10 11:35:40.858566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.890 [2024-12-10 11:35:40.858584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:18.890 [2024-12-10 11:35:40.858596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:18.890 [2024-12-10 11:35:40.858606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.890 [2024-12-10 11:35:40.858646] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:18.890 [2024-12-10 11:35:40.862906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.890 [2024-12-10 11:35:40.863075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:18.890 [2024-12-10 11:35:40.863109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.269 ms 00:27:18.890 [2024-12-10 11:35:40.863122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.890 [2024-12-10 11:35:40.863170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.890 [2024-12-10 11:35:40.863186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:18.890 [2024-12-10 11:35:40.863199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:18.890 [2024-12-10 11:35:40.863209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.890 [2024-12-10 11:35:40.863261] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:18.890 [2024-12-10 11:35:40.863293] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:18.890 [2024-12-10 11:35:40.863337] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:18.890 [2024-12-10 11:35:40.863362] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:18.890 [2024-12-10 11:35:40.863488] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:18.890 [2024-12-10 11:35:40.863502] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:18.890 [2024-12-10 11:35:40.863516] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:18.890 [2024-12-10 11:35:40.863530] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:18.890 [2024-12-10 11:35:40.863543] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:18.890 [2024-12-10 11:35:40.863554] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:18.890 [2024-12-10 11:35:40.863579] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:18.890 [2024-12-10 11:35:40.863594] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:18.890 [2024-12-10 11:35:40.863618] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:18.890 [2024-12-10 11:35:40.863628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.890 [2024-12-10 11:35:40.863639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:18.890 [2024-12-10 11:35:40.863649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:27:18.890 [2024-12-10 11:35:40.863659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.890 [2024-12-10 11:35:40.863809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.890 [2024-12-10 11:35:40.863828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:18.890 [2024-12-10 11:35:40.863841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:27:18.890 [2024-12-10 11:35:40.863851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.890 [2024-12-10 11:35:40.864018] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:18.890 [2024-12-10 11:35:40.864038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:18.890 [2024-12-10 11:35:40.864050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:18.890 [2024-12-10 11:35:40.864062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.890 [2024-12-10 11:35:40.864073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:18.890 [2024-12-10 11:35:40.864082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:18.890 [2024-12-10 11:35:40.864092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:18.890 [2024-12-10 11:35:40.864103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:18.890 [2024-12-10 11:35:40.864113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:18.890 [2024-12-10 11:35:40.864123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:18.890 [2024-12-10 11:35:40.864133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:18.890 [2024-12-10 11:35:40.864144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:18.890 [2024-12-10 11:35:40.864153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:18.890 [2024-12-10 11:35:40.864177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:18.890 [2024-12-10 11:35:40.864188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:18.891 [2024-12-10 11:35:40.864198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.891 [2024-12-10 11:35:40.864208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:18.891 [2024-12-10 11:35:40.864218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:18.891 [2024-12-10 11:35:40.864228] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.891 [2024-12-10 11:35:40.864238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:18.891 [2024-12-10 11:35:40.864248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:18.891 [2024-12-10 11:35:40.864258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.891 [2024-12-10 11:35:40.864268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:18.891 [2024-12-10 11:35:40.864278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:18.891 [2024-12-10 11:35:40.864288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.891 [2024-12-10 11:35:40.864297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:18.891 [2024-12-10 11:35:40.864307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:18.891 [2024-12-10 11:35:40.864317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.891 [2024-12-10 11:35:40.864327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:18.891 [2024-12-10 11:35:40.864337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:18.891 [2024-12-10 11:35:40.864346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.891 [2024-12-10 11:35:40.864356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:18.891 [2024-12-10 11:35:40.864366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:18.891 [2024-12-10 11:35:40.864375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:18.891 [2024-12-10 11:35:40.864385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:18.891 [2024-12-10 11:35:40.864395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:18.891 [2024-12-10 11:35:40.864405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:18.891 [2024-12-10 11:35:40.864415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:18.891 [2024-12-10 11:35:40.864426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:18.891 [2024-12-10 11:35:40.864435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.891 [2024-12-10 11:35:40.864445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:18.891 [2024-12-10 11:35:40.864465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:18.891 [2024-12-10 11:35:40.864475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.891 [2024-12-10 11:35:40.864486] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:18.891 [2024-12-10 11:35:40.864498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:18.891 [2024-12-10 11:35:40.864508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:18.891 [2024-12-10 11:35:40.864519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.891 [2024-12-10 11:35:40.864530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:18.891 [2024-12-10 11:35:40.864540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:18.891 [2024-12-10 11:35:40.864550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:18.891 
[2024-12-10 11:35:40.864561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:18.891 [2024-12-10 11:35:40.864571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:18.891 [2024-12-10 11:35:40.864580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:18.891 [2024-12-10 11:35:40.864592] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:18.891 [2024-12-10 11:35:40.864605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.891 [2024-12-10 11:35:40.864622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:18.891 [2024-12-10 11:35:40.864633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:18.891 [2024-12-10 11:35:40.864658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:18.891 [2024-12-10 11:35:40.864672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:18.891 [2024-12-10 11:35:40.864683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:18.891 [2024-12-10 11:35:40.864694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:18.891 [2024-12-10 11:35:40.864705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:18.891 [2024-12-10 11:35:40.864716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:18.891 [2024-12-10 11:35:40.864726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:18.891 [2024-12-10 11:35:40.864737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:18.891 [2024-12-10 11:35:40.864748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:18.891 [2024-12-10 11:35:40.864759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:18.891 [2024-12-10 11:35:40.864770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:18.891 [2024-12-10 11:35:40.864781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:18.891 [2024-12-10 11:35:40.864792] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:18.891 [2024-12-10 11:35:40.864805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.891 [2024-12-10 11:35:40.864817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:18.891 [2024-12-10 11:35:40.864828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:18.891 [2024-12-10 11:35:40.864839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:18.891 [2024-12-10 11:35:40.864850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:18.891 [2024-12-10 11:35:40.864862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.891 [2024-12-10 11:35:40.864873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:18.891 [2024-12-10 11:35:40.864885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:27:18.891 [2024-12-10 11:35:40.864895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.891 [2024-12-10 11:35:40.898794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.891 [2024-12-10 11:35:40.898851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:18.891 [2024-12-10 11:35:40.898886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.836 ms 00:27:18.891 [2024-12-10 11:35:40.898901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.891 [2024-12-10 11:35:40.899006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.891 [2024-12-10 11:35:40.899019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:18.891 [2024-12-10 11:35:40.899047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:18.891 [2024-12-10 11:35:40.899056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.891 [2024-12-10 11:35:40.951903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.891 [2024-12-10 11:35:40.951960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:18.891 [2024-12-10 11:35:40.952020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.754 ms 00:27:18.891 [2024-12-10 11:35:40.952032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.891 [2024-12-10 11:35:40.952108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.891 [2024-12-10 11:35:40.952125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:18.891 [2024-12-10 11:35:40.952145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:18.891 [2024-12-10 11:35:40.952155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.891 [2024-12-10 11:35:40.952621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.891 [2024-12-10 11:35:40.952647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:18.891 [2024-12-10 11:35:40.952662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:27:18.891 [2024-12-10 11:35:40.952673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.891 [2024-12-10 11:35:40.952851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.891 [2024-12-10 11:35:40.952871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:18.891 [2024-12-10 11:35:40.952888] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:27:18.891 [2024-12-10 11:35:40.952899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.891 [2024-12-10 11:35:40.969505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.891 [2024-12-10 11:35:40.969556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:18.891 [2024-12-10 11:35:40.969589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.577 ms 00:27:18.891 [2024-12-10 11:35:40.969600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.891 [2024-12-10 11:35:40.985664] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:18.891 [2024-12-10 11:35:40.985706] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:18.891 [2024-12-10 11:35:40.985740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.891 [2024-12-10 11:35:40.985752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:18.891 [2024-12-10 11:35:40.985764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.974 ms 00:27:18.891 [2024-12-10 11:35:40.985773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.891 [2024-12-10 11:35:41.016374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.891 [2024-12-10 11:35:41.016472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:18.891 [2024-12-10 11:35:41.016492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.550 ms 00:27:18.891 [2024-12-10 11:35:41.016519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.892 [2024-12-10 11:35:41.033309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.892 [2024-12-10 11:35:41.033511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:18.892 [2024-12-10 11:35:41.033541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.706 ms 00:27:18.892 [2024-12-10 11:35:41.033553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.892 [2024-12-10 11:35:41.049652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.892 [2024-12-10 11:35:41.049740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:18.892 [2024-12-10 11:35:41.049760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.037 ms 00:27:18.892 [2024-12-10 11:35:41.049772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.892 [2024-12-10 11:35:41.050707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.892 [2024-12-10 11:35:41.050735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:18.892 [2024-12-10 11:35:41.050756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:27:18.892 [2024-12-10 11:35:41.050767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.151 [2024-12-10 11:35:41.123510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.151 [2024-12-10 11:35:41.123613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:19.151 [2024-12-10 11:35:41.123703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.710 ms 00:27:19.151 [2024-12-10 11:35:41.123717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.151 [2024-12-10 11:35:41.136291] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:19.151 [2024-12-10 11:35:41.138955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.151 [2024-12-10 11:35:41.139005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:19.151 [2024-12-10 11:35:41.139038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.166 ms 00:27:19.151 [2024-12-10 11:35:41.139048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.151 [2024-12-10 11:35:41.139162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.151 [2024-12-10 11:35:41.139198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:19.151 [2024-12-10 11:35:41.139215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:19.151 [2024-12-10 11:35:41.139226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.151 [2024-12-10 11:35:41.140874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.151 [2024-12-10 11:35:41.141105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:19.151 [2024-12-10 11:35:41.141131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.591 ms 00:27:19.151 [2024-12-10 11:35:41.141143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.151 [2024-12-10 11:35:41.141186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.151 [2024-12-10 11:35:41.141202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:19.151 [2024-12-10 11:35:41.141214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:19.151 [2024-12-10 11:35:41.141224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.151 [2024-12-10 11:35:41.141273] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:19.151 [2024-12-10 11:35:41.141289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.151 [2024-12-10 11:35:41.141299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:19.151 [2024-12-10 11:35:41.141310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:19.151 [2024-12-10 11:35:41.141321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.151 [2024-12-10 11:35:41.171651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.151 [2024-12-10 11:35:41.171698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:19.151 [2024-12-10 11:35:41.171741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.283 ms 00:27:19.151 [2024-12-10 11:35:41.171760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.151 [2024-12-10 11:35:41.171846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.151 [2024-12-10 11:35:41.171865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:19.151 [2024-12-10 11:35:41.171892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:19.151 [2024-12-10 11:35:41.171903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
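(Annotation, not part of the test output: each FTL management step above is traced by mngt/ftl_mngt.c as an Action/name/duration/status quadruple, and the finish_msg entry just below folds them into the total 'FTL startup' time. A minimal sketch, assuming the log has been saved one entry per line as ftl.log, that turns those quadruples into a slowest-steps breakdown:)

awk '/trace_step.*name:/     { sub(/.*name: /, ""); name = $0 }
     /trace_step.*duration:/ { printf "%10s ms  %s\n", $(NF-1), name }' ftl.log |
  sort -rn | head
# Pairs each "name:" entry with the "duration:" entry that follows it.
# e.g. the slowest startup step above is "Restore P2L checkpoints" at
# 72.710 ms; run over the full log this also picks up the shutdown steps.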
00:27:19.151 [2024-12-10 11:35:41.173124] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 338.423 ms, result 0 00:27:20.528  [2024-12-10T11:35:43.631Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-10T11:35:44.567Z] Copying: 43/1024 [MB] (23 MBps) [2024-12-10T11:35:45.501Z] Copying: 67/1024 [MB] (23 MBps) [2024-12-10T11:35:46.437Z] Copying: 91/1024 [MB] (24 MBps) [2024-12-10T11:35:47.815Z] Copying: 115/1024 [MB] (24 MBps) [2024-12-10T11:35:48.750Z] Copying: 139/1024 [MB] (23 MBps) [2024-12-10T11:35:49.716Z] Copying: 162/1024 [MB] (23 MBps) [2024-12-10T11:35:50.666Z] Copying: 185/1024 [MB] (23 MBps) [2024-12-10T11:35:51.602Z] Copying: 208/1024 [MB] (23 MBps) [2024-12-10T11:35:52.539Z] Copying: 232/1024 [MB] (23 MBps) [2024-12-10T11:35:53.475Z] Copying: 255/1024 [MB] (23 MBps) [2024-12-10T11:35:54.412Z] Copying: 278/1024 [MB] (23 MBps) [2024-12-10T11:35:55.788Z] Copying: 300/1024 [MB] (21 MBps) [2024-12-10T11:35:56.723Z] Copying: 324/1024 [MB] (23 MBps) [2024-12-10T11:35:57.658Z] Copying: 347/1024 [MB] (23 MBps) [2024-12-10T11:35:58.593Z] Copying: 370/1024 [MB] (23 MBps) [2024-12-10T11:35:59.527Z] Copying: 395/1024 [MB] (24 MBps) [2024-12-10T11:36:00.463Z] Copying: 419/1024 [MB] (23 MBps) [2024-12-10T11:36:01.399Z] Copying: 443/1024 [MB] (23 MBps) [2024-12-10T11:36:02.776Z] Copying: 465/1024 [MB] (22 MBps) [2024-12-10T11:36:03.712Z] Copying: 487/1024 [MB] (22 MBps) [2024-12-10T11:36:04.648Z] Copying: 510/1024 [MB] (22 MBps) [2024-12-10T11:36:05.584Z] Copying: 532/1024 [MB] (22 MBps) [2024-12-10T11:36:06.518Z] Copying: 556/1024 [MB] (24 MBps) [2024-12-10T11:36:07.455Z] Copying: 580/1024 [MB] (24 MBps) [2024-12-10T11:36:08.390Z] Copying: 604/1024 [MB] (23 MBps) [2024-12-10T11:36:09.766Z] Copying: 628/1024 [MB] (24 MBps) [2024-12-10T11:36:10.721Z] Copying: 653/1024 [MB] (25 MBps) [2024-12-10T11:36:11.657Z] Copying: 680/1024 [MB] (26 MBps) [2024-12-10T11:36:12.594Z] Copying: 705/1024 [MB] (25 MBps) [2024-12-10T11:36:13.531Z] Copying: 731/1024 [MB] (25 MBps) [2024-12-10T11:36:14.466Z] Copying: 756/1024 [MB] (25 MBps) [2024-12-10T11:36:15.402Z] Copying: 782/1024 [MB] (25 MBps) [2024-12-10T11:36:16.777Z] Copying: 808/1024 [MB] (26 MBps) [2024-12-10T11:36:17.711Z] Copying: 835/1024 [MB] (26 MBps) [2024-12-10T11:36:18.646Z] Copying: 861/1024 [MB] (25 MBps) [2024-12-10T11:36:19.582Z] Copying: 886/1024 [MB] (24 MBps) [2024-12-10T11:36:20.532Z] Copying: 911/1024 [MB] (24 MBps) [2024-12-10T11:36:21.468Z] Copying: 934/1024 [MB] (23 MBps) [2024-12-10T11:36:22.402Z] Copying: 959/1024 [MB] (24 MBps) [2024-12-10T11:36:23.777Z] Copying: 983/1024 [MB] (24 MBps) [2024-12-10T11:36:24.344Z] Copying: 1008/1024 [MB] (24 MBps) [2024-12-10T11:36:24.602Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-10 11:36:24.517516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.435 [2024-12-10 11:36:24.517621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:02.435 [2024-12-10 11:36:24.517703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:02.435 [2024-12-10 11:36:24.517725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.435 [2024-12-10 11:36:24.517783] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:02.435 [2024-12-10 11:36:24.523575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.435 [2024-12-10 11:36:24.523665] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:02.435 [2024-12-10 11:36:24.523695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.759 ms 00:28:02.435 [2024-12-10 11:36:24.523723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.435 [2024-12-10 11:36:24.524100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.435 [2024-12-10 11:36:24.524135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:02.435 [2024-12-10 11:36:24.524153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:28:02.435 [2024-12-10 11:36:24.524187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.435 [2024-12-10 11:36:24.531973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.435 [2024-12-10 11:36:24.532053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:02.435 [2024-12-10 11:36:24.532087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.743 ms 00:28:02.435 [2024-12-10 11:36:24.532114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.435 [2024-12-10 11:36:24.541849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.435 [2024-12-10 11:36:24.542107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:02.435 [2024-12-10 11:36:24.542155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.672 ms 00:28:02.435 [2024-12-10 11:36:24.542186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.435 [2024-12-10 11:36:24.588029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.435 [2024-12-10 11:36:24.588091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:02.435 [2024-12-10 11:36:24.588114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.763 ms 00:28:02.435 [2024-12-10 11:36:24.588128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.694 [2024-12-10 11:36:24.609199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.694 [2024-12-10 11:36:24.609432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:02.694 [2024-12-10 11:36:24.609466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.015 ms 00:28:02.694 [2024-12-10 11:36:24.609482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.694 [2024-12-10 11:36:24.725053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.694 [2024-12-10 11:36:24.725142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:02.694 [2024-12-10 11:36:24.725167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.510 ms 00:28:02.695 [2024-12-10 11:36:24.725182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.695 [2024-12-10 11:36:24.764798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.695 [2024-12-10 11:36:24.764849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:02.695 [2024-12-10 11:36:24.764880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.589 ms 00:28:02.695 [2024-12-10 11:36:24.764893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.695 [2024-12-10 11:36:24.802235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.695 
[2024-12-10 11:36:24.802490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:02.695 [2024-12-10 11:36:24.802520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.290 ms 00:28:02.695 [2024-12-10 11:36:24.802532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.695 [2024-12-10 11:36:24.836505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.695 [2024-12-10 11:36:24.836545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:02.695 [2024-12-10 11:36:24.836577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.927 ms 00:28:02.695 [2024-12-10 11:36:24.836604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.955 [2024-12-10 11:36:24.870449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.955 [2024-12-10 11:36:24.870490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:02.955 [2024-12-10 11:36:24.870538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.737 ms 00:28:02.955 [2024-12-10 11:36:24.870564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.955 [2024-12-10 11:36:24.870622] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:02.955 [2024-12-10 11:36:24.870674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:28:02.955 [2024-12-10 11:36:24.870724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870893] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.870994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 
11:36:24.871174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:28:02.955 [2024-12-10 11:36:24.871451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:02.955 [2024-12-10 11:36:24.871564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:02.956 [2024-12-10 11:36:24.871874] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:02.956 [2024-12-10 11:36:24.871885] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e6d53703-a958-44d8-b655-6b7e2fd3fe66 00:28:02.956 [2024-12-10 11:36:24.871896] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:28:02.956 [2024-12-10 11:36:24.871907] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 16576 00:28:02.956 [2024-12-10 11:36:24.871917] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 15616 00:28:02.956 [2024-12-10 11:36:24.871928] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0615 00:28:02.956 [2024-12-10 11:36:24.871946] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:02.956 [2024-12-10 11:36:24.871970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:02.956 [2024-12-10 11:36:24.871982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:02.956 [2024-12-10 11:36:24.871992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:02.956 [2024-12-10 11:36:24.872012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:02.956 [2024-12-10 11:36:24.872024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.956 [2024-12-10 11:36:24.872035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:02.956 [2024-12-10 11:36:24.872047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:28:02.956 [2024-12-10 11:36:24.872058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.956 [2024-12-10 11:36:24.889693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.956 [2024-12-10 11:36:24.889766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:02.956 [2024-12-10 11:36:24.889792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.591 ms 00:28:02.956 [2024-12-10 
11:36:24.889804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.956 [2024-12-10 11:36:24.890233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.956 [2024-12-10 11:36:24.890265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:02.956 [2024-12-10 11:36:24.890279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:28:02.956 [2024-12-10 11:36:24.890290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.956 [2024-12-10 11:36:24.936325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.956 [2024-12-10 11:36:24.936385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:02.956 [2024-12-10 11:36:24.936403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.956 [2024-12-10 11:36:24.936414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.956 [2024-12-10 11:36:24.936516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.956 [2024-12-10 11:36:24.936531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:02.956 [2024-12-10 11:36:24.936543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.956 [2024-12-10 11:36:24.936553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.956 [2024-12-10 11:36:24.936688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.956 [2024-12-10 11:36:24.936710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:02.956 [2024-12-10 11:36:24.936730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.956 [2024-12-10 11:36:24.936741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.956 [2024-12-10 11:36:24.936781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.956 [2024-12-10 11:36:24.936795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:02.956 [2024-12-10 11:36:24.936807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.956 [2024-12-10 11:36:24.936818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.956 [2024-12-10 11:36:25.047296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.956 [2024-12-10 11:36:25.047385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:02.956 [2024-12-10 11:36:25.047435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.956 [2024-12-10 11:36:25.047446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.214 [2024-12-10 11:36:25.137032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.214 [2024-12-10 11:36:25.137089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:03.214 [2024-12-10 11:36:25.137109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.214 [2024-12-10 11:36:25.137120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.214 [2024-12-10 11:36:25.137217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.214 [2024-12-10 11:36:25.137235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:03.214 [2024-12-10 11:36:25.137247] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.214 [2024-12-10 11:36:25.137266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.214 [2024-12-10 11:36:25.137314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.214 [2024-12-10 11:36:25.137330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:03.214 [2024-12-10 11:36:25.137341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.214 [2024-12-10 11:36:25.137352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.214 [2024-12-10 11:36:25.137471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.214 [2024-12-10 11:36:25.137489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:03.214 [2024-12-10 11:36:25.137501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.214 [2024-12-10 11:36:25.137512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.214 [2024-12-10 11:36:25.137567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.214 [2024-12-10 11:36:25.137584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:03.214 [2024-12-10 11:36:25.137596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.214 [2024-12-10 11:36:25.137606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.214 [2024-12-10 11:36:25.137670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.214 [2024-12-10 11:36:25.137689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:03.214 [2024-12-10 11:36:25.137701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.215 [2024-12-10 11:36:25.137711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.215 [2024-12-10 11:36:25.137768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.215 [2024-12-10 11:36:25.137785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:03.215 [2024-12-10 11:36:25.137796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.215 [2024-12-10 11:36:25.137807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.215 [2024-12-10 11:36:25.137948] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 620.426 ms, result 0 00:28:04.151 00:28:04.151 00:28:04.151 11:36:26 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:06.684 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:06.684 Process with pid 79237 is not found 00:28:06.684 Remove shared memory files 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79237 00:28:06.684 11:36:28 ftl.ftl_restore -- 
common/autotest_common.sh@954 -- # '[' -z 79237 ']' 00:28:06.684 11:36:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79237 00:28:06.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79237) - No such process 00:28:06.684 11:36:28 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79237 is not found' 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:06.684 11:36:28 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:28:06.684 00:28:06.684 real 3m20.008s 00:28:06.684 user 3m5.363s 00:28:06.684 sys 0m16.365s 00:28:06.684 ************************************ 00:28:06.684 END TEST ftl_restore 00:28:06.684 ************************************ 00:28:06.684 11:36:28 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:06.684 11:36:28 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:06.684 11:36:28 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:06.684 11:36:28 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:06.684 11:36:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.684 11:36:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:06.684 ************************************ 00:28:06.684 START TEST ftl_dirty_shutdown 00:28:06.684 ************************************ 00:28:06.684 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:06.684 * Looking for test storage... 
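(Annotation, not part of the test output: two figures from the restore run above are worth pulling out before the dirty_shutdown test gets going. md5sum -c reported 'testfile: OK', confirming the data written before shutdown came back intact, and the statistics dump put the write amplification factor at 1.0615. The latter is simply total writes divided by user writes; a quick check, using the counters from the ftl_debug.c dump above:)

awk 'BEGIN { printf "WAF = %.4f\n", 16576 / 15616 }'
# -> WAF = 1.0615, i.e. total writes (16576) / user writes (15616),
#    matching the value logged by ftl_debug.c:216 above.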
00:28:06.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:06.684 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:06.684 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:06.684 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:06.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.941 --rc genhtml_branch_coverage=1 00:28:06.941 --rc genhtml_function_coverage=1 00:28:06.941 --rc genhtml_legend=1 00:28:06.941 --rc geninfo_all_blocks=1 00:28:06.941 --rc geninfo_unexecuted_blocks=1 00:28:06.941 00:28:06.941 ' 00:28:06.941 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:06.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.941 --rc genhtml_branch_coverage=1 00:28:06.941 --rc genhtml_function_coverage=1 00:28:06.941 --rc genhtml_legend=1 00:28:06.942 --rc geninfo_all_blocks=1 00:28:06.942 --rc geninfo_unexecuted_blocks=1 00:28:06.942 00:28:06.942 ' 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.942 --rc genhtml_branch_coverage=1 00:28:06.942 --rc genhtml_function_coverage=1 00:28:06.942 --rc genhtml_legend=1 00:28:06.942 --rc geninfo_all_blocks=1 00:28:06.942 --rc geninfo_unexecuted_blocks=1 00:28:06.942 00:28:06.942 ' 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.942 --rc genhtml_branch_coverage=1 00:28:06.942 --rc genhtml_function_coverage=1 00:28:06.942 --rc genhtml_legend=1 00:28:06.942 --rc geninfo_all_blocks=1 00:28:06.942 --rc geninfo_unexecuted_blocks=1 00:28:06.942 00:28:06.942 ' 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:28:06.942 11:36:28 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:28:06.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81317 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81317 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81317 ']' 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:06.942 11:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:06.942 [2024-12-10 11:36:29.082471] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
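The trace above shows dirty_shutdown.sh parsing its `-c` option into nv_cache=0000:00:10.0 and the positional device 0000:00:11.0, then launching spdk_tgt on core 0 (`-m 0x1`), stashing its pid in svcpid=81317, and blocking in waitforlisten until the RPC socket answers. A minimal sketch of that handshake, assuming the paths from the trace; the real waitforlisten in autotest_common.sh also supports TCP RPC addresses and finer-grained retry accounting:

  spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$spdk_tgt_bin" -m 0x1 &        # core mask 0x1: pin the target to core 0
  svcpid=$!
  trap 'kill "$svcpid"; exit 1' SIGINT SIGTERM EXIT
  for i in $(seq 1 100); do       # max_retries=100, as in the trace
      kill -0 "$svcpid" || { echo "spdk_tgt exited early"; exit 1; }
      # Probe the default UNIX socket /var/tmp/spdk.sock with a harmless RPC.
      "$rpc_py" rpc_get_methods &>/dev/null && break
      sleep 0.5
  done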
00:28:06.942 [2024-12-10 11:36:29.082819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81317 ] 00:28:07.200 [2024-12-10 11:36:29.262676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.458 [2024-12-10 11:36:29.371263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.394 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:08.394 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:08.394 11:36:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:08.394 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:28:08.394 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:08.394 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:28:08.394 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:08.394 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:08.652 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:08.652 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:08.652 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:08.652 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:08.652 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:08.652 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:08.652 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:08.652 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:08.911 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:08.911 { 00:28:08.911 "name": "nvme0n1", 00:28:08.911 "aliases": [ 00:28:08.911 "7a2d183b-673c-4541-84c1-6e25285af793" 00:28:08.911 ], 00:28:08.911 "product_name": "NVMe disk", 00:28:08.911 "block_size": 4096, 00:28:08.911 "num_blocks": 1310720, 00:28:08.911 "uuid": "7a2d183b-673c-4541-84c1-6e25285af793", 00:28:08.911 "numa_id": -1, 00:28:08.911 "assigned_rate_limits": { 00:28:08.911 "rw_ios_per_sec": 0, 00:28:08.911 "rw_mbytes_per_sec": 0, 00:28:08.911 "r_mbytes_per_sec": 0, 00:28:08.911 "w_mbytes_per_sec": 0 00:28:08.911 }, 00:28:08.911 "claimed": true, 00:28:08.911 "claim_type": "read_many_write_one", 00:28:08.911 "zoned": false, 00:28:08.911 "supported_io_types": { 00:28:08.911 "read": true, 00:28:08.911 "write": true, 00:28:08.911 "unmap": true, 00:28:08.911 "flush": true, 00:28:08.911 "reset": true, 00:28:08.911 "nvme_admin": true, 00:28:08.911 "nvme_io": true, 00:28:08.911 "nvme_io_md": false, 00:28:08.911 "write_zeroes": true, 00:28:08.911 "zcopy": false, 00:28:08.911 "get_zone_info": false, 00:28:08.911 "zone_management": false, 00:28:08.911 "zone_append": false, 00:28:08.911 "compare": true, 00:28:08.911 "compare_and_write": false, 00:28:08.911 "abort": true, 00:28:08.911 "seek_hole": false, 00:28:08.911 "seek_data": false, 00:28:08.911 
"copy": true, 00:28:08.911 "nvme_iov_md": false 00:28:08.911 }, 00:28:08.911 "driver_specific": { 00:28:08.911 "nvme": [ 00:28:08.911 { 00:28:08.911 "pci_address": "0000:00:11.0", 00:28:08.911 "trid": { 00:28:08.911 "trtype": "PCIe", 00:28:08.911 "traddr": "0000:00:11.0" 00:28:08.911 }, 00:28:08.911 "ctrlr_data": { 00:28:08.911 "cntlid": 0, 00:28:08.911 "vendor_id": "0x1b36", 00:28:08.911 "model_number": "QEMU NVMe Ctrl", 00:28:08.911 "serial_number": "12341", 00:28:08.911 "firmware_revision": "8.0.0", 00:28:08.911 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:08.911 "oacs": { 00:28:08.911 "security": 0, 00:28:08.911 "format": 1, 00:28:08.911 "firmware": 0, 00:28:08.911 "ns_manage": 1 00:28:08.911 }, 00:28:08.911 "multi_ctrlr": false, 00:28:08.911 "ana_reporting": false 00:28:08.911 }, 00:28:08.911 "vs": { 00:28:08.911 "nvme_version": "1.4" 00:28:08.911 }, 00:28:08.911 "ns_data": { 00:28:08.911 "id": 1, 00:28:08.911 "can_share": false 00:28:08.911 } 00:28:08.911 } 00:28:08.911 ], 00:28:08.911 "mp_policy": "active_passive" 00:28:08.911 } 00:28:08.911 } 00:28:08.911 ]' 00:28:08.911 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:08.911 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:08.911 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:08.911 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:08.911 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:08.911 11:36:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:08.911 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:08.911 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:08.911 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:08.911 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:08.911 11:36:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:09.170 11:36:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=6b8dd37b-2a9d-4bac-8ae4-555b3d4af959 00:28:09.170 11:36:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:09.170 11:36:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b8dd37b-2a9d-4bac-8ae4-555b3d4af959 00:28:09.429 11:36:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:09.688 11:36:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=faa5eff3-2ebf-4896-8525-94373363f2ae 00:28:09.688 11:36:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u faa5eff3-2ebf-4896-8525-94373363f2ae 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=c464a12e-75b8-465f-b387-13fabcd9d05e 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c464a12e-75b8-465f-b387-13fabcd9d05e 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=c464a12e-75b8-465f-b387-13fabcd9d05e 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size c464a12e-75b8-465f-b387-13fabcd9d05e 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c464a12e-75b8-465f-b387-13fabcd9d05e 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:09.946 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c464a12e-75b8-465f-b387-13fabcd9d05e 00:28:10.514 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:10.514 { 00:28:10.514 "name": "c464a12e-75b8-465f-b387-13fabcd9d05e", 00:28:10.514 "aliases": [ 00:28:10.514 "lvs/nvme0n1p0" 00:28:10.514 ], 00:28:10.514 "product_name": "Logical Volume", 00:28:10.514 "block_size": 4096, 00:28:10.514 "num_blocks": 26476544, 00:28:10.514 "uuid": "c464a12e-75b8-465f-b387-13fabcd9d05e", 00:28:10.514 "assigned_rate_limits": { 00:28:10.514 "rw_ios_per_sec": 0, 00:28:10.514 "rw_mbytes_per_sec": 0, 00:28:10.514 "r_mbytes_per_sec": 0, 00:28:10.514 "w_mbytes_per_sec": 0 00:28:10.514 }, 00:28:10.514 "claimed": false, 00:28:10.514 "zoned": false, 00:28:10.514 "supported_io_types": { 00:28:10.514 "read": true, 00:28:10.514 "write": true, 00:28:10.514 "unmap": true, 00:28:10.514 "flush": false, 00:28:10.514 "reset": true, 00:28:10.514 "nvme_admin": false, 00:28:10.514 "nvme_io": false, 00:28:10.514 "nvme_io_md": false, 00:28:10.514 "write_zeroes": true, 00:28:10.514 "zcopy": false, 00:28:10.514 "get_zone_info": false, 00:28:10.514 "zone_management": false, 00:28:10.514 "zone_append": false, 00:28:10.514 "compare": false, 00:28:10.514 "compare_and_write": false, 00:28:10.514 "abort": false, 00:28:10.514 "seek_hole": true, 00:28:10.514 "seek_data": true, 00:28:10.514 "copy": false, 00:28:10.514 "nvme_iov_md": false 00:28:10.514 }, 00:28:10.514 "driver_specific": { 00:28:10.514 "lvol": { 00:28:10.514 "lvol_store_uuid": "faa5eff3-2ebf-4896-8525-94373363f2ae", 00:28:10.514 "base_bdev": "nvme0n1", 00:28:10.514 "thin_provision": true, 00:28:10.514 "num_allocated_clusters": 0, 00:28:10.514 "snapshot": false, 00:28:10.514 "clone": false, 00:28:10.514 "esnap_clone": false 00:28:10.514 } 00:28:10.514 } 00:28:10.514 } 00:28:10.514 ]' 00:28:10.514 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:10.514 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:10.514 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:10.514 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:10.514 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:10.514 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:10.514 11:36:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:28:10.514 11:36:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:10.514 11:36:32 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:10.773 11:36:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:10.773 11:36:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:10.773 11:36:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size c464a12e-75b8-465f-b387-13fabcd9d05e 00:28:10.773 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c464a12e-75b8-465f-b387-13fabcd9d05e 00:28:10.773 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:10.773 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:10.773 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:10.773 11:36:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c464a12e-75b8-465f-b387-13fabcd9d05e 00:28:11.031 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:11.031 { 00:28:11.032 "name": "c464a12e-75b8-465f-b387-13fabcd9d05e", 00:28:11.032 "aliases": [ 00:28:11.032 "lvs/nvme0n1p0" 00:28:11.032 ], 00:28:11.032 "product_name": "Logical Volume", 00:28:11.032 "block_size": 4096, 00:28:11.032 "num_blocks": 26476544, 00:28:11.032 "uuid": "c464a12e-75b8-465f-b387-13fabcd9d05e", 00:28:11.032 "assigned_rate_limits": { 00:28:11.032 "rw_ios_per_sec": 0, 00:28:11.032 "rw_mbytes_per_sec": 0, 00:28:11.032 "r_mbytes_per_sec": 0, 00:28:11.032 "w_mbytes_per_sec": 0 00:28:11.032 }, 00:28:11.032 "claimed": false, 00:28:11.032 "zoned": false, 00:28:11.032 "supported_io_types": { 00:28:11.032 "read": true, 00:28:11.032 "write": true, 00:28:11.032 "unmap": true, 00:28:11.032 "flush": false, 00:28:11.032 "reset": true, 00:28:11.032 "nvme_admin": false, 00:28:11.032 "nvme_io": false, 00:28:11.032 "nvme_io_md": false, 00:28:11.032 "write_zeroes": true, 00:28:11.032 "zcopy": false, 00:28:11.032 "get_zone_info": false, 00:28:11.032 "zone_management": false, 00:28:11.032 "zone_append": false, 00:28:11.032 "compare": false, 00:28:11.032 "compare_and_write": false, 00:28:11.032 "abort": false, 00:28:11.032 "seek_hole": true, 00:28:11.032 "seek_data": true, 00:28:11.032 "copy": false, 00:28:11.032 "nvme_iov_md": false 00:28:11.032 }, 00:28:11.032 "driver_specific": { 00:28:11.032 "lvol": { 00:28:11.032 "lvol_store_uuid": "faa5eff3-2ebf-4896-8525-94373363f2ae", 00:28:11.032 "base_bdev": "nvme0n1", 00:28:11.032 "thin_provision": true, 00:28:11.032 "num_allocated_clusters": 0, 00:28:11.032 "snapshot": false, 00:28:11.032 "clone": false, 00:28:11.032 "esnap_clone": false 00:28:11.032 } 00:28:11.032 } 00:28:11.032 } 00:28:11.032 ]' 00:28:11.032 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:11.301 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:11.301 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:11.301 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:11.301 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:11.301 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:11.301 11:36:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:28:11.301 11:36:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:11.591 11:36:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:28:11.592 11:36:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size c464a12e-75b8-465f-b387-13fabcd9d05e 00:28:11.592 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c464a12e-75b8-465f-b387-13fabcd9d05e 00:28:11.592 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:11.592 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:11.592 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:11.592 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c464a12e-75b8-465f-b387-13fabcd9d05e 00:28:11.858 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:11.858 { 00:28:11.858 "name": "c464a12e-75b8-465f-b387-13fabcd9d05e", 00:28:11.858 "aliases": [ 00:28:11.858 "lvs/nvme0n1p0" 00:28:11.858 ], 00:28:11.858 "product_name": "Logical Volume", 00:28:11.858 "block_size": 4096, 00:28:11.858 "num_blocks": 26476544, 00:28:11.858 "uuid": "c464a12e-75b8-465f-b387-13fabcd9d05e", 00:28:11.858 "assigned_rate_limits": { 00:28:11.858 "rw_ios_per_sec": 0, 00:28:11.858 "rw_mbytes_per_sec": 0, 00:28:11.858 "r_mbytes_per_sec": 0, 00:28:11.858 "w_mbytes_per_sec": 0 00:28:11.858 }, 00:28:11.858 "claimed": false, 00:28:11.858 "zoned": false, 00:28:11.858 "supported_io_types": { 00:28:11.858 "read": true, 00:28:11.858 "write": true, 00:28:11.858 "unmap": true, 00:28:11.858 "flush": false, 00:28:11.858 "reset": true, 00:28:11.858 "nvme_admin": false, 00:28:11.858 "nvme_io": false, 00:28:11.858 "nvme_io_md": false, 00:28:11.858 "write_zeroes": true, 00:28:11.858 "zcopy": false, 00:28:11.858 "get_zone_info": false, 00:28:11.858 "zone_management": false, 00:28:11.858 "zone_append": false, 00:28:11.858 "compare": false, 00:28:11.858 "compare_and_write": false, 00:28:11.858 "abort": false, 00:28:11.858 "seek_hole": true, 00:28:11.858 "seek_data": true, 00:28:11.858 "copy": false, 00:28:11.858 "nvme_iov_md": false 00:28:11.858 }, 00:28:11.858 "driver_specific": { 00:28:11.858 "lvol": { 00:28:11.858 "lvol_store_uuid": "faa5eff3-2ebf-4896-8525-94373363f2ae", 00:28:11.858 "base_bdev": "nvme0n1", 00:28:11.858 "thin_provision": true, 00:28:11.858 "num_allocated_clusters": 0, 00:28:11.859 "snapshot": false, 00:28:11.859 "clone": false, 00:28:11.859 "esnap_clone": false 00:28:11.859 } 00:28:11.859 } 00:28:11.859 } 00:28:11.859 ]' 00:28:11.859 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:11.859 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:11.859 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:11.859 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:11.859 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:11.859 11:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:11.859 11:36:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:28:11.859 11:36:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c464a12e-75b8-465f-b387-13fabcd9d05e 
--l2p_dram_limit 10' 00:28:11.859 11:36:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:28:11.859 11:36:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:28:11.859 11:36:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:11.859 11:36:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c464a12e-75b8-465f-b387-13fabcd9d05e --l2p_dram_limit 10 -c nvc0n1p0 00:28:12.118 [2024-12-10 11:36:34.256747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.118 [2024-12-10 11:36:34.256813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:12.118 [2024-12-10 11:36:34.256839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:12.118 [2024-12-10 11:36:34.256853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.118 [2024-12-10 11:36:34.256949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.118 [2024-12-10 11:36:34.256970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:12.118 [2024-12-10 11:36:34.256986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:12.118 [2024-12-10 11:36:34.256999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.118 [2024-12-10 11:36:34.257040] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:12.118 [2024-12-10 11:36:34.258233] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:12.118 [2024-12-10 11:36:34.258345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.118 [2024-12-10 11:36:34.258365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:12.118 [2024-12-10 11:36:34.258382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:28:12.118 [2024-12-10 11:36:34.258394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.118 [2024-12-10 11:36:34.258576] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a48a3d2f-9343-46ec-b5be-0bdd53e2eb48 00:28:12.118 [2024-12-10 11:36:34.259677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.118 [2024-12-10 11:36:34.259726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:12.118 [2024-12-10 11:36:34.259745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:12.118 [2024-12-10 11:36:34.259759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.118 [2024-12-10 11:36:34.264718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.118 [2024-12-10 11:36:34.264773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:12.118 [2024-12-10 11:36:34.264792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.896 ms 00:28:12.118 [2024-12-10 11:36:34.264806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.118 [2024-12-10 11:36:34.264925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.118 [2024-12-10 11:36:34.264948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:12.118 [2024-12-10 11:36:34.264962] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:28:12.118 [2024-12-10 11:36:34.264980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.118 [2024-12-10 11:36:34.265100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.118 [2024-12-10 11:36:34.265124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:12.118 [2024-12-10 11:36:34.265139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:12.118 [2024-12-10 11:36:34.265152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.118 [2024-12-10 11:36:34.265184] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:12.118 [2024-12-10 11:36:34.270003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.118 [2024-12-10 11:36:34.270055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:12.118 [2024-12-10 11:36:34.270077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.822 ms 00:28:12.119 [2024-12-10 11:36:34.270090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.119 [2024-12-10 11:36:34.270149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.119 [2024-12-10 11:36:34.270167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:12.119 [2024-12-10 11:36:34.270183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:12.119 [2024-12-10 11:36:34.270195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.119 [2024-12-10 11:36:34.270258] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:12.119 [2024-12-10 11:36:34.270426] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:12.119 [2024-12-10 11:36:34.270450] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:12.119 [2024-12-10 11:36:34.270466] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:12.119 [2024-12-10 11:36:34.270484] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:12.119 [2024-12-10 11:36:34.270498] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:12.119 [2024-12-10 11:36:34.270512] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:12.119 [2024-12-10 11:36:34.270524] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:12.119 [2024-12-10 11:36:34.270543] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:12.119 [2024-12-10 11:36:34.270554] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:12.119 [2024-12-10 11:36:34.270569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.119 [2024-12-10 11:36:34.270608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:12.119 [2024-12-10 11:36:34.270638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:28:12.119 [2024-12-10 11:36:34.270667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.119 [2024-12-10 11:36:34.270807] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.119 [2024-12-10 11:36:34.270823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:12.119 [2024-12-10 11:36:34.270839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:28:12.119 [2024-12-10 11:36:34.270851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.119 [2024-12-10 11:36:34.270966] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:12.119 [2024-12-10 11:36:34.270983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:12.119 [2024-12-10 11:36:34.270998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:12.119 [2024-12-10 11:36:34.271010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:12.119 [2024-12-10 11:36:34.271036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:12.119 [2024-12-10 11:36:34.271060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:12.119 [2024-12-10 11:36:34.271074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:12.119 [2024-12-10 11:36:34.271100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:12.119 [2024-12-10 11:36:34.271113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:12.119 [2024-12-10 11:36:34.271126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:12.119 [2024-12-10 11:36:34.271137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:12.119 [2024-12-10 11:36:34.271150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:12.119 [2024-12-10 11:36:34.271161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:12.119 [2024-12-10 11:36:34.271188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:12.119 [2024-12-10 11:36:34.271200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:12.119 [2024-12-10 11:36:34.271225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:12.119 [2024-12-10 11:36:34.271249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:12.119 [2024-12-10 11:36:34.271260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:12.119 [2024-12-10 11:36:34.271283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:12.119 [2024-12-10 11:36:34.271296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:12.119 [2024-12-10 11:36:34.271320] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:12.119 [2024-12-10 11:36:34.271331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:12.119 [2024-12-10 11:36:34.271355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:12.119 [2024-12-10 11:36:34.271370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:12.119 [2024-12-10 11:36:34.271395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:12.119 [2024-12-10 11:36:34.271406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:12.119 [2024-12-10 11:36:34.271421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:12.119 [2024-12-10 11:36:34.271431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:12.119 [2024-12-10 11:36:34.271445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:12.119 [2024-12-10 11:36:34.271455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:12.119 [2024-12-10 11:36:34.271479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:12.119 [2024-12-10 11:36:34.271493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271503] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:12.119 [2024-12-10 11:36:34.271517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:12.119 [2024-12-10 11:36:34.271540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:12.119 [2024-12-10 11:36:34.271555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.119 [2024-12-10 11:36:34.271568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:12.119 [2024-12-10 11:36:34.271584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:12.119 [2024-12-10 11:36:34.271595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:12.119 [2024-12-10 11:36:34.271608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:12.119 [2024-12-10 11:36:34.271619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:12.119 [2024-12-10 11:36:34.271647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:12.119 [2024-12-10 11:36:34.271662] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:12.119 [2024-12-10 11:36:34.271683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:12.119 [2024-12-10 11:36:34.271696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:12.119 [2024-12-10 11:36:34.271710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:12.119 [2024-12-10 11:36:34.271722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:12.119 [2024-12-10 11:36:34.271735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:12.119 [2024-12-10 11:36:34.271748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:12.119 [2024-12-10 11:36:34.271761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:12.119 [2024-12-10 11:36:34.271773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:12.119 [2024-12-10 11:36:34.271789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:12.119 [2024-12-10 11:36:34.271802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:12.119 [2024-12-10 11:36:34.271819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:12.119 [2024-12-10 11:36:34.271831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:12.119 [2024-12-10 11:36:34.271845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:12.119 [2024-12-10 11:36:34.271857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:12.119 [2024-12-10 11:36:34.271872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:12.119 [2024-12-10 11:36:34.271884] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:12.119 [2024-12-10 11:36:34.271899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:12.119 [2024-12-10 11:36:34.271912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:12.119 [2024-12-10 11:36:34.271927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:12.119 [2024-12-10 11:36:34.271939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:12.119 [2024-12-10 11:36:34.271954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:12.119 [2024-12-10 11:36:34.271968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.119 [2024-12-10 11:36:34.271982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:12.119 [2024-12-10 11:36:34.271995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:28:12.119 [2024-12-10 11:36:34.272021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.120 [2024-12-10 11:36:34.272078] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:12.120 [2024-12-10 11:36:34.272100] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:14.652 [2024-12-10 11:36:36.457340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.457674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:14.652 [2024-12-10 11:36:36.457828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2185.275 ms 00:28:14.652 [2024-12-10 11:36:36.457957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.486787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.487055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:14.652 [2024-12-10 11:36:36.487198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.547 ms 00:28:14.652 [2024-12-10 11:36:36.487324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.487555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.487624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:14.652 [2024-12-10 11:36:36.487749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:28:14.652 [2024-12-10 11:36:36.487870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.522602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.522847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:14.652 [2024-12-10 11:36:36.522972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.612 ms 00:28:14.652 [2024-12-10 11:36:36.523111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.523225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.523333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:14.652 [2024-12-10 11:36:36.523440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:14.652 [2024-12-10 11:36:36.523503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.524069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.524215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:14.652 [2024-12-10 11:36:36.524337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:28:14.652 [2024-12-10 11:36:36.524446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.524624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.524736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:14.652 [2024-12-10 11:36:36.524918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:28:14.652 [2024-12-10 11:36:36.524977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.540053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.540254] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:14.652 [2024-12-10 11:36:36.540367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.019 ms 00:28:14.652 [2024-12-10 11:36:36.540420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.561765] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:14.652 [2024-12-10 11:36:36.564564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.564807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:14.652 [2024-12-10 11:36:36.564931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.903 ms 00:28:14.652 [2024-12-10 11:36:36.565049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.621840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.622112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:14.652 [2024-12-10 11:36:36.622241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.671 ms 00:28:14.652 [2024-12-10 11:36:36.622264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.622479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.622515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:14.652 [2024-12-10 11:36:36.622531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:28:14.652 [2024-12-10 11:36:36.622541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.649055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.649095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:14.652 [2024-12-10 11:36:36.649147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.446 ms 00:28:14.652 [2024-12-10 11:36:36.649158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.673672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.673710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:14.652 [2024-12-10 11:36:36.673729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.460 ms 00:28:14.652 [2024-12-10 11:36:36.673739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.674306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.674328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:14.652 [2024-12-10 11:36:36.674343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:28:14.652 [2024-12-10 11:36:36.674354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.744133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.744385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:14.652 [2024-12-10 11:36:36.744423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.731 ms 00:28:14.652 [2024-12-10 11:36:36.744437] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.770075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.770115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:14.652 [2024-12-10 11:36:36.770135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.523 ms 00:28:14.652 [2024-12-10 11:36:36.770160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.652 [2024-12-10 11:36:36.794816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.652 [2024-12-10 11:36:36.794854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:14.652 [2024-12-10 11:36:36.794873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.611 ms 00:28:14.652 [2024-12-10 11:36:36.794883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.911 [2024-12-10 11:36:36.820589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.911 [2024-12-10 11:36:36.820657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:14.911 [2024-12-10 11:36:36.820693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.654 ms 00:28:14.911 [2024-12-10 11:36:36.820704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.911 [2024-12-10 11:36:36.820790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.911 [2024-12-10 11:36:36.820822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:14.912 [2024-12-10 11:36:36.820839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:14.912 [2024-12-10 11:36:36.820850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.912 [2024-12-10 11:36:36.820985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.912 [2024-12-10 11:36:36.821024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:14.912 [2024-12-10 11:36:36.821071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:14.912 [2024-12-10 11:36:36.821083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.912 [2024-12-10 11:36:36.822337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2564.868 ms, result 0 00:28:14.912 { 00:28:14.912 "name": "ftl0", 00:28:14.912 "uuid": "a48a3d2f-9343-46ec-b5be-0bdd53e2eb48" 00:28:14.912 } 00:28:14.912 11:36:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:28:14.912 11:36:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:15.172 11:36:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:28:15.172 11:36:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:28:15.172 11:36:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:28:15.431 /dev/nbd0 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:28:15.431 1+0 records in 00:28:15.431 1+0 records out 00:28:15.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257182 s, 15.9 MB/s 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:28:15.431 11:36:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:28:15.690 [2024-12-10 11:36:37.601112] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:28:15.690 [2024-12-10 11:36:37.601283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81460 ] 00:28:15.690 [2024-12-10 11:36:37.786404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.948 [2024-12-10 11:36:37.904995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.323  [2024-12-10T11:36:40.426Z] Copying: 176/1024 [MB] (176 MBps) [2024-12-10T11:36:41.363Z] Copying: 359/1024 [MB] (183 MBps) [2024-12-10T11:36:42.299Z] Copying: 543/1024 [MB] (184 MBps) [2024-12-10T11:36:43.234Z] Copying: 726/1024 [MB] (183 MBps) [2024-12-10T11:36:44.169Z] Copying: 897/1024 [MB] (170 MBps) [2024-12-10T11:36:45.105Z] Copying: 1024/1024 [MB] (average 178 MBps) 00:28:22.938 00:28:22.938 11:36:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:24.843 11:36:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:28:24.843 [2024-12-10 11:36:46.964161] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
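The spdk_dd invocation at dirty_shutdown.sh@75 sizes its payload as 262144 blocks x 4096 B = 1,073,741,824 B = 1024 MiB, which is exactly the "Copying: 1024/1024 [MB] (average 178 MBps)" the progress lines report; `-m 0x2` pins the dd app to core 1, matching the "Reactor started on core 1" notice. A plain-coreutils equivalent of the generate-and-fingerprint step (illustrative only; the test deliberately uses spdk_dd so the I/O exercises SPDK's own dd application):

  dd if=/dev/urandom of=testfile bs=4096 count=262144   # 1 GiB of random data
  md5sum testfile > testfile.md5                        # reference digest for the later comparison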
00:28:24.843 [2024-12-10 11:36:46.964331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81559 ] 00:28:25.103 [2024-12-10 11:36:47.135586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.103 [2024-12-10 11:36:47.223862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.481  [2024-12-10T11:36:49.583Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-10T11:36:50.518Z] Copying: 28/1024 [MB] (13 MBps) [2024-12-10T11:36:51.896Z] Copying: 41/1024 [MB] (13 MBps) [2024-12-10T11:36:52.834Z] Copying: 54/1024 [MB] (13 MBps) [2024-12-10T11:36:53.769Z] Copying: 67/1024 [MB] (12 MBps) [2024-12-10T11:36:54.705Z] Copying: 80/1024 [MB] (12 MBps) [2024-12-10T11:36:55.639Z] Copying: 93/1024 [MB] (13 MBps) [2024-12-10T11:36:56.575Z] Copying: 106/1024 [MB] (13 MBps) [2024-12-10T11:36:57.511Z] Copying: 120/1024 [MB] (13 MBps) [2024-12-10T11:36:58.888Z] Copying: 134/1024 [MB] (13 MBps) [2024-12-10T11:36:59.861Z] Copying: 149/1024 [MB] (15 MBps) [2024-12-10T11:37:00.797Z] Copying: 162/1024 [MB] (13 MBps) [2024-12-10T11:37:01.734Z] Copying: 175/1024 [MB] (12 MBps) [2024-12-10T11:37:02.671Z] Copying: 189/1024 [MB] (14 MBps) [2024-12-10T11:37:03.607Z] Copying: 204/1024 [MB] (14 MBps) [2024-12-10T11:37:04.544Z] Copying: 218/1024 [MB] (14 MBps) [2024-12-10T11:37:05.920Z] Copying: 233/1024 [MB] (14 MBps) [2024-12-10T11:37:06.488Z] Copying: 248/1024 [MB] (14 MBps) [2024-12-10T11:37:07.864Z] Copying: 262/1024 [MB] (14 MBps) [2024-12-10T11:37:08.801Z] Copying: 277/1024 [MB] (15 MBps) [2024-12-10T11:37:09.738Z] Copying: 292/1024 [MB] (14 MBps) [2024-12-10T11:37:10.674Z] Copying: 306/1024 [MB] (14 MBps) [2024-12-10T11:37:11.611Z] Copying: 321/1024 [MB] (14 MBps) [2024-12-10T11:37:12.546Z] Copying: 335/1024 [MB] (14 MBps) [2024-12-10T11:37:13.922Z] Copying: 350/1024 [MB] (14 MBps) [2024-12-10T11:37:14.489Z] Copying: 365/1024 [MB] (14 MBps) [2024-12-10T11:37:15.915Z] Copying: 379/1024 [MB] (14 MBps) [2024-12-10T11:37:16.852Z] Copying: 394/1024 [MB] (14 MBps) [2024-12-10T11:37:17.789Z] Copying: 409/1024 [MB] (14 MBps) [2024-12-10T11:37:18.725Z] Copying: 423/1024 [MB] (14 MBps) [2024-12-10T11:37:19.664Z] Copying: 438/1024 [MB] (14 MBps) [2024-12-10T11:37:20.598Z] Copying: 453/1024 [MB] (14 MBps) [2024-12-10T11:37:21.533Z] Copying: 468/1024 [MB] (14 MBps) [2024-12-10T11:37:22.909Z] Copying: 482/1024 [MB] (14 MBps) [2024-12-10T11:37:23.844Z] Copying: 497/1024 [MB] (14 MBps) [2024-12-10T11:37:24.779Z] Copying: 512/1024 [MB] (14 MBps) [2024-12-10T11:37:25.713Z] Copying: 527/1024 [MB] (15 MBps) [2024-12-10T11:37:26.648Z] Copying: 543/1024 [MB] (15 MBps) [2024-12-10T11:37:27.584Z] Copying: 558/1024 [MB] (15 MBps) [2024-12-10T11:37:28.520Z] Copying: 573/1024 [MB] (15 MBps) [2024-12-10T11:37:29.896Z] Copying: 588/1024 [MB] (14 MBps) [2024-12-10T11:37:30.831Z] Copying: 602/1024 [MB] (14 MBps) [2024-12-10T11:37:31.768Z] Copying: 618/1024 [MB] (15 MBps) [2024-12-10T11:37:32.704Z] Copying: 634/1024 [MB] (15 MBps) [2024-12-10T11:37:33.639Z] Copying: 650/1024 [MB] (15 MBps) [2024-12-10T11:37:34.572Z] Copying: 665/1024 [MB] (15 MBps) [2024-12-10T11:37:35.507Z] Copying: 681/1024 [MB] (15 MBps) [2024-12-10T11:37:36.882Z] Copying: 697/1024 [MB] (15 MBps) [2024-12-10T11:37:37.817Z] Copying: 712/1024 [MB] (15 MBps) [2024-12-10T11:37:38.753Z] Copying: 728/1024 [MB] (15 MBps) [2024-12-10T11:37:39.688Z] 
Copying: 743/1024 [MB] (15 MBps) [2024-12-10T11:37:40.624Z] Copying: 759/1024 [MB] (15 MBps) [2024-12-10T11:37:41.557Z] Copying: 774/1024 [MB] (14 MBps) [2024-12-10T11:37:42.491Z] Copying: 789/1024 [MB] (15 MBps) [2024-12-10T11:37:43.867Z] Copying: 805/1024 [MB] (15 MBps) [2024-12-10T11:37:44.802Z] Copying: 820/1024 [MB] (15 MBps) [2024-12-10T11:37:45.738Z] Copying: 835/1024 [MB] (14 MBps) [2024-12-10T11:37:46.674Z] Copying: 850/1024 [MB] (15 MBps) [2024-12-10T11:37:47.649Z] Copying: 865/1024 [MB] (14 MBps) [2024-12-10T11:37:48.585Z] Copying: 880/1024 [MB] (14 MBps) [2024-12-10T11:37:49.520Z] Copying: 895/1024 [MB] (15 MBps) [2024-12-10T11:37:50.895Z] Copying: 910/1024 [MB] (15 MBps) [2024-12-10T11:37:51.831Z] Copying: 925/1024 [MB] (15 MBps) [2024-12-10T11:37:52.766Z] Copying: 941/1024 [MB] (15 MBps) [2024-12-10T11:37:53.701Z] Copying: 956/1024 [MB] (15 MBps) [2024-12-10T11:37:54.637Z] Copying: 971/1024 [MB] (15 MBps) [2024-12-10T11:37:55.572Z] Copying: 986/1024 [MB] (15 MBps) [2024-12-10T11:37:56.506Z] Copying: 1002/1024 [MB] (15 MBps) [2024-12-10T11:37:57.073Z] Copying: 1016/1024 [MB] (14 MBps) [2024-12-10T11:37:58.008Z] Copying: 1024/1024 [MB] (average 14 MBps) 00:29:35.841 00:29:35.841 11:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:29:35.841 11:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:29:36.100 11:37:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:36.359 [2024-12-10 11:37:58.466647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.359 [2024-12-10 11:37:58.466719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:36.359 [2024-12-10 11:37:58.466740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:36.359 [2024-12-10 11:37:58.466753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.359 [2024-12-10 11:37:58.466791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:36.359 [2024-12-10 11:37:58.470219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.359 [2024-12-10 11:37:58.470257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:36.359 [2024-12-10 11:37:58.470292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.397 ms 00:29:36.359 [2024-12-10 11:37:58.470304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.359 [2024-12-10 11:37:58.472238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.359 [2024-12-10 11:37:58.472474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:36.359 [2024-12-10 11:37:58.472534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:29:36.359 [2024-12-10 11:37:58.472575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.359 [2024-12-10 11:37:58.489460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.359 [2024-12-10 11:37:58.489525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:36.359 [2024-12-10 11:37:58.489565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.811 ms 00:29:36.359 [2024-12-10 11:37:58.489577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.359 [2024-12-10 
11:37:58.496795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.359 [2024-12-10 11:37:58.496847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:36.359 [2024-12-10 11:37:58.496884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.165 ms 00:29:36.359 [2024-12-10 11:37:58.496896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.619 [2024-12-10 11:37:58.529704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.619 [2024-12-10 11:37:58.529781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:36.619 [2024-12-10 11:37:58.529804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.698 ms 00:29:36.619 [2024-12-10 11:37:58.529816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.619 [2024-12-10 11:37:58.547547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.619 [2024-12-10 11:37:58.547593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:36.619 [2024-12-10 11:37:58.547633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.675 ms 00:29:36.619 [2024-12-10 11:37:58.547675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.619 [2024-12-10 11:37:58.547871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.619 [2024-12-10 11:37:58.547893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:36.619 [2024-12-10 11:37:58.547924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:29:36.619 [2024-12-10 11:37:58.547940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.619 [2024-12-10 11:37:58.577348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.619 [2024-12-10 11:37:58.577394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:36.619 [2024-12-10 11:37:58.577432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.365 ms 00:29:36.619 [2024-12-10 11:37:58.577443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.619 [2024-12-10 11:37:58.607610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.619 [2024-12-10 11:37:58.607863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:36.619 [2024-12-10 11:37:58.607903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.072 ms 00:29:36.619 [2024-12-10 11:37:58.607926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.619 [2024-12-10 11:37:58.639887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.619 [2024-12-10 11:37:58.639930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:36.619 [2024-12-10 11:37:58.639967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.859 ms 00:29:36.619 [2024-12-10 11:37:58.639979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.619 [2024-12-10 11:37:58.670528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.619 [2024-12-10 11:37:58.670746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:36.619 [2024-12-10 11:37:58.670792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.380 ms 00:29:36.619 [2024-12-10 11:37:58.670817] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.619 [2024-12-10 11:37:58.670892] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:36.619 [2024-12-10 11:37:58.670932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.670964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.670990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:36.619 [2024-12-10 11:37:58.671269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.671996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672020] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 
11:37:58.672417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:36.620 [2024-12-10 11:37:58.672465] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:36.620 [2024-12-10 11:37:58.672477] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a48a3d2f-9343-46ec-b5be-0bdd53e2eb48 00:29:36.620 [2024-12-10 11:37:58.672488] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:36.620 [2024-12-10 11:37:58.672502] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:36.620 [2024-12-10 11:37:58.672513] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:36.620 [2024-12-10 11:37:58.672525] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:36.620 [2024-12-10 11:37:58.672536] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:36.620 [2024-12-10 11:37:58.672548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:36.620 [2024-12-10 11:37:58.672558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:36.620 [2024-12-10 11:37:58.672569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:36.620 [2024-12-10 11:37:58.672579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:36.620 [2024-12-10 11:37:58.672592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.620 [2024-12-10 11:37:58.672603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:36.620 [2024-12-10 11:37:58.672616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.709 ms 00:29:36.620 [2024-12-10 11:37:58.672627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.621 [2024-12-10 11:37:58.689577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.621 [2024-12-10 11:37:58.689857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:36.621 [2024-12-10 11:37:58.689910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.883 ms 00:29:36.621 [2024-12-10 11:37:58.689938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.621 [2024-12-10 11:37:58.690485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.621 [2024-12-10 11:37:58.690512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:36.621 [2024-12-10 11:37:58.690543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:29:36.621 [2024-12-10 11:37:58.690570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.621 [2024-12-10 11:37:58.744990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.621 [2024-12-10 11:37:58.745247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:36.621 [2024-12-10 11:37:58.745296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.621 [2024-12-10 11:37:58.745317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.621 [2024-12-10 11:37:58.745437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.621 [2024-12-10 11:37:58.745465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:29:36.621 [2024-12-10 11:37:58.745496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.621 [2024-12-10 11:37:58.745536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.621 [2024-12-10 11:37:58.745755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.621 [2024-12-10 11:37:58.745781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:36.621 [2024-12-10 11:37:58.745797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.621 [2024-12-10 11:37:58.745809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.621 [2024-12-10 11:37:58.745843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.621 [2024-12-10 11:37:58.745874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:36.621 [2024-12-10 11:37:58.745887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.621 [2024-12-10 11:37:58.745913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.880 [2024-12-10 11:37:58.852136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.880 [2024-12-10 11:37:58.852207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:36.880 [2024-12-10 11:37:58.852232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.880 [2024-12-10 11:37:58.852245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.880 [2024-12-10 11:37:58.940108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.880 [2024-12-10 11:37:58.940178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:36.880 [2024-12-10 11:37:58.940202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.880 [2024-12-10 11:37:58.940215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.880 [2024-12-10 11:37:58.940366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.880 [2024-12-10 11:37:58.940387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:36.880 [2024-12-10 11:37:58.940422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.880 [2024-12-10 11:37:58.940433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.880 [2024-12-10 11:37:58.940508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.880 [2024-12-10 11:37:58.940526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:36.880 [2024-12-10 11:37:58.940542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.880 [2024-12-10 11:37:58.940553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.880 [2024-12-10 11:37:58.940736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.880 [2024-12-10 11:37:58.940758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:36.880 [2024-12-10 11:37:58.940774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.880 [2024-12-10 11:37:58.940789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.880 [2024-12-10 11:37:58.940858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.880 [2024-12-10 11:37:58.940878] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:36.880 [2024-12-10 11:37:58.940893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.880 [2024-12-10 11:37:58.940906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.880 [2024-12-10 11:37:58.940956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.880 [2024-12-10 11:37:58.940973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:36.880 [2024-12-10 11:37:58.940988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.880 [2024-12-10 11:37:58.941002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.880 [2024-12-10 11:37:58.941090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.880 [2024-12-10 11:37:58.941188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:36.880 [2024-12-10 11:37:58.941231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.880 [2024-12-10 11:37:58.941254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.880 [2024-12-10 11:37:58.941540] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 474.807 ms, result 0 00:29:36.880 true 00:29:36.880 11:37:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81317 00:29:36.880 11:37:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81317 00:29:36.880 11:37:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:29:37.139 [2024-12-10 11:37:59.072483] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:29:37.139 [2024-12-10 11:37:59.072678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82263 ] 00:29:37.139 [2024-12-10 11:37:59.250945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.398 [2024-12-10 11:37:59.343929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.774  [2024-12-10T11:38:01.877Z] Copying: 175/1024 [MB] (175 MBps) [2024-12-10T11:38:02.871Z] Copying: 338/1024 [MB] (163 MBps) [2024-12-10T11:38:03.807Z] Copying: 497/1024 [MB] (158 MBps) [2024-12-10T11:38:04.742Z] Copying: 661/1024 [MB] (163 MBps) [2024-12-10T11:38:05.679Z] Copying: 839/1024 [MB] (177 MBps) [2024-12-10T11:38:05.679Z] Copying: 1013/1024 [MB] (174 MBps) [2024-12-10T11:38:06.616Z] Copying: 1024/1024 [MB] (average 169 MBps) 00:29:44.449 00:29:44.708 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81317 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:29:44.708 11:38:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:44.708 [2024-12-10 11:38:06.718768] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
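The restore run just started drives ftl0 directly from spdk_dd, without the killed target, using the bdev configuration snapshot the test captured earlier (dirty_shutdown.sh lines 64-66). A minimal sketch of that save-and-replay pattern; the flags mirror the commands logged above, while the file paths are placeholders:

# Sketch of the config save-and-replay visible in this trace; paths are
# placeholders, flags are the ones actually logged.
rpc_py=scripts/rpc.py
cfg=/tmp/ftl.json

# Wrap the dumped bdev subsystem config in a top-level "subsystems" array so
# spdk_dd can consume it via --json.
{
    echo '{"subsystems": ['
    "$rpc_py" save_subsystem_config -n bdev
    echo ']}'
} > "$cfg"

# After kill -9 leaves the device dirty, a standalone spdk_dd brings the FTL
# bdev back up from the snapshot and writes the second 1 GiB extent:
# --seek=262144 resumes at the 1 GiB mark (262144 blocks of 4 KiB).
build/bin/spdk_dd --if=/tmp/testfile2 --ob=ftl0 --bs=4096 \
    --count=262144 --seek=262144 --json="$cfg"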
00:29:44.708 [2024-12-10 11:38:06.718940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82346 ] 00:29:44.966 [2024-12-10 11:38:06.901263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.966 [2024-12-10 11:38:06.995547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.225 [2024-12-10 11:38:07.293142] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:45.225 [2024-12-10 11:38:07.293254] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:45.225 [2024-12-10 11:38:07.359361] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:45.225 [2024-12-10 11:38:07.359849] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:45.225 [2024-12-10 11:38:07.360173] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:45.484 [2024-12-10 11:38:07.635271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.484 [2024-12-10 11:38:07.635331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:45.484 [2024-12-10 11:38:07.635351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:45.484 [2024-12-10 11:38:07.635369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.484 [2024-12-10 11:38:07.635447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.484 [2024-12-10 11:38:07.635468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:45.484 [2024-12-10 11:38:07.635481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:45.484 [2024-12-10 11:38:07.635492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.484 [2024-12-10 11:38:07.635525] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:45.484 [2024-12-10 11:38:07.636584] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:45.484 [2024-12-10 11:38:07.636652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.484 [2024-12-10 11:38:07.636680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:45.484 [2024-12-10 11:38:07.636703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:29:45.484 [2024-12-10 11:38:07.636723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.484 [2024-12-10 11:38:07.638218] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:45.744 [2024-12-10 11:38:07.655910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.744 [2024-12-10 11:38:07.655959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:45.744 [2024-12-10 11:38:07.655979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.697 ms 00:29:45.744 [2024-12-10 11:38:07.656003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.744 [2024-12-10 11:38:07.656115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.744 [2024-12-10 11:38:07.656138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:29:45.744 [2024-12-10 11:38:07.656152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:45.744 [2024-12-10 11:38:07.656163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.744 [2024-12-10 11:38:07.661260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.744 [2024-12-10 11:38:07.661478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:45.744 [2024-12-10 11:38:07.661659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.994 ms 00:29:45.744 [2024-12-10 11:38:07.661821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.744 [2024-12-10 11:38:07.662144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.744 [2024-12-10 11:38:07.662312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:45.744 [2024-12-10 11:38:07.662357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:29:45.744 [2024-12-10 11:38:07.662383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.744 [2024-12-10 11:38:07.662480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.744 [2024-12-10 11:38:07.662500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:45.744 [2024-12-10 11:38:07.662514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:45.744 [2024-12-10 11:38:07.662525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.745 [2024-12-10 11:38:07.662577] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:45.745 [2024-12-10 11:38:07.667045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.745 [2024-12-10 11:38:07.667118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:45.745 [2024-12-10 11:38:07.667150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.493 ms 00:29:45.745 [2024-12-10 11:38:07.667162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.745 [2024-12-10 11:38:07.667210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.745 [2024-12-10 11:38:07.667228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:45.745 [2024-12-10 11:38:07.667240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:45.745 [2024-12-10 11:38:07.667252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.745 [2024-12-10 11:38:07.667335] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:45.745 [2024-12-10 11:38:07.667371] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:45.745 [2024-12-10 11:38:07.667418] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:45.745 [2024-12-10 11:38:07.667438] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:45.745 [2024-12-10 11:38:07.667550] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:45.745 [2024-12-10 11:38:07.667567] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:45.745 
[2024-12-10 11:38:07.667582] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:45.745 [2024-12-10 11:38:07.667602] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:45.745 [2024-12-10 11:38:07.667616] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:45.745 [2024-12-10 11:38:07.667628] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:45.745 [2024-12-10 11:38:07.667639] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:45.745 [2024-12-10 11:38:07.667707] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:45.745 [2024-12-10 11:38:07.667734] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:45.745 [2024-12-10 11:38:07.667747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.745 [2024-12-10 11:38:07.667759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:45.745 [2024-12-10 11:38:07.667770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:29:45.745 [2024-12-10 11:38:07.667798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.745 [2024-12-10 11:38:07.667921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.745 [2024-12-10 11:38:07.667978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:45.745 [2024-12-10 11:38:07.668005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:29:45.745 [2024-12-10 11:38:07.668025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.745 [2024-12-10 11:38:07.668189] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:45.745 [2024-12-10 11:38:07.668224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:45.745 [2024-12-10 11:38:07.668248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:45.745 [2024-12-10 11:38:07.668269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.745 [2024-12-10 11:38:07.668290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:45.745 [2024-12-10 11:38:07.668311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:45.745 [2024-12-10 11:38:07.668333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:45.745 [2024-12-10 11:38:07.668352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:45.745 [2024-12-10 11:38:07.668373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:45.745 [2024-12-10 11:38:07.668417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:45.745 [2024-12-10 11:38:07.668439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:45.745 [2024-12-10 11:38:07.668461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:45.745 [2024-12-10 11:38:07.668482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:45.745 [2024-12-10 11:38:07.668500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:45.745 [2024-12-10 11:38:07.668526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:45.745 [2024-12-10 11:38:07.668545] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.745 [2024-12-10 11:38:07.668566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:45.745 [2024-12-10 11:38:07.668585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:45.745 [2024-12-10 11:38:07.668605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.745 [2024-12-10 11:38:07.668648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:45.745 [2024-12-10 11:38:07.668673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:45.745 [2024-12-10 11:38:07.668695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:45.745 [2024-12-10 11:38:07.668715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:45.745 [2024-12-10 11:38:07.668736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:45.745 [2024-12-10 11:38:07.668756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:45.745 [2024-12-10 11:38:07.668775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:45.745 [2024-12-10 11:38:07.668796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:45.745 [2024-12-10 11:38:07.668815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:45.745 [2024-12-10 11:38:07.668834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:45.745 [2024-12-10 11:38:07.668854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:45.745 [2024-12-10 11:38:07.668873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:45.745 [2024-12-10 11:38:07.668892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:45.745 [2024-12-10 11:38:07.668914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:45.745 [2024-12-10 11:38:07.668935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:45.745 [2024-12-10 11:38:07.668955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:45.745 [2024-12-10 11:38:07.668976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:45.745 [2024-12-10 11:38:07.668996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:45.745 [2024-12-10 11:38:07.669016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:45.745 [2024-12-10 11:38:07.669036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:45.745 [2024-12-10 11:38:07.669055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.745 [2024-12-10 11:38:07.669073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:45.745 [2024-12-10 11:38:07.669093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:45.745 [2024-12-10 11:38:07.669113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.745 [2024-12-10 11:38:07.669133] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:45.745 [2024-12-10 11:38:07.669155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:45.745 [2024-12-10 11:38:07.669189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:45.745 [2024-12-10 11:38:07.669213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.745 [2024-12-10 
11:38:07.669236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:45.745 [2024-12-10 11:38:07.669258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:45.745 [2024-12-10 11:38:07.669280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:45.745 [2024-12-10 11:38:07.669301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:45.745 [2024-12-10 11:38:07.669320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:45.745 [2024-12-10 11:38:07.669340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:45.745 [2024-12-10 11:38:07.669363] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:45.745 [2024-12-10 11:38:07.669389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:45.745 [2024-12-10 11:38:07.669413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:45.745 [2024-12-10 11:38:07.669435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:45.745 [2024-12-10 11:38:07.669458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:45.745 [2024-12-10 11:38:07.669483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:45.745 [2024-12-10 11:38:07.669504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:45.745 [2024-12-10 11:38:07.669526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:45.745 [2024-12-10 11:38:07.669547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:45.745 [2024-12-10 11:38:07.669575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:45.745 [2024-12-10 11:38:07.669595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:45.745 [2024-12-10 11:38:07.669615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:45.745 [2024-12-10 11:38:07.669663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:45.745 [2024-12-10 11:38:07.669688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:45.745 [2024-12-10 11:38:07.669711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:45.745 [2024-12-10 11:38:07.669735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:45.745 [2024-12-10 11:38:07.669757] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:29:45.745 [2024-12-10 11:38:07.669781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:45.745 [2024-12-10 11:38:07.669804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:45.746 [2024-12-10 11:38:07.669825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:45.746 [2024-12-10 11:38:07.669846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:45.746 [2024-12-10 11:38:07.669867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:45.746 [2024-12-10 11:38:07.669889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.669912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:45.746 [2024-12-10 11:38:07.669934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.791 ms 00:29:45.746 [2024-12-10 11:38:07.669957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.746 [2024-12-10 11:38:07.704139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.704203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:45.746 [2024-12-10 11:38:07.704225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.062 ms 00:29:45.746 [2024-12-10 11:38:07.704239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.746 [2024-12-10 11:38:07.704367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.704385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:45.746 [2024-12-10 11:38:07.704398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:45.746 [2024-12-10 11:38:07.704410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.746 [2024-12-10 11:38:07.764731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.764795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:45.746 [2024-12-10 11:38:07.764822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.220 ms 00:29:45.746 [2024-12-10 11:38:07.764836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.746 [2024-12-10 11:38:07.764917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.764935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:45.746 [2024-12-10 11:38:07.764948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:45.746 [2024-12-10 11:38:07.764959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.746 [2024-12-10 11:38:07.765397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.765418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:45.746 [2024-12-10 11:38:07.765433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:29:45.746 [2024-12-10 11:38:07.765450] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.746 [2024-12-10 11:38:07.765618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.765659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:45.746 [2024-12-10 11:38:07.765673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:29:45.746 [2024-12-10 11:38:07.765685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.746 [2024-12-10 11:38:07.783277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.783512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:45.746 [2024-12-10 11:38:07.783560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.559 ms 00:29:45.746 [2024-12-10 11:38:07.783586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.746 [2024-12-10 11:38:07.800846] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:45.746 [2024-12-10 11:38:07.800905] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:45.746 [2024-12-10 11:38:07.800927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.800941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:45.746 [2024-12-10 11:38:07.800956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.075 ms 00:29:45.746 [2024-12-10 11:38:07.800968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.746 [2024-12-10 11:38:07.832098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.832338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:45.746 [2024-12-10 11:38:07.832521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.036 ms 00:29:45.746 [2024-12-10 11:38:07.832727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.746 [2024-12-10 11:38:07.849535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.849808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:45.746 [2024-12-10 11:38:07.849855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.551 ms 00:29:45.746 [2024-12-10 11:38:07.849879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.746 [2024-12-10 11:38:07.866300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.866521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:45.746 [2024-12-10 11:38:07.866712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.330 ms 00:29:45.746 [2024-12-10 11:38:07.866914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.746 [2024-12-10 11:38:07.868027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.746 [2024-12-10 11:38:07.868221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:45.746 [2024-12-10 11:38:07.868386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:29:45.746 [2024-12-10 11:38:07.868563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
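Because the target was killed with the device left dirty, this startup takes the recovery path: the superblock loads with SHM clean 0, the blobstore is replayed, and the restore steps above rebuild NV cache, valid map, band info, and trim state from persisted metadata. A test like this typically proves recovery worked by checksumming the pre-crash data against what the recovered device returns (the trace does run md5sum on the test files); a hedged sketch of such a check, assuming the device is re-exported over nbd as before and with placeholder names:

# Sketch of a post-recovery data check; names are placeholders and the exact
# comparison the script performs is not shown in this excerpt.
sum_before=$(md5sum /tmp/testfile | cut -d' ' -f1)

# Read the same extent back through the recovered FTL device and compare
# digests; a mismatch would mean the dirty-shutdown recovery lost data.
dd if=/dev/nbd0 of=/tmp/testfile.readback bs=4096 count=262144 iflag=direct
sum_after=$(md5sum /tmp/testfile.readback | cut -d' ' -f1)
[[ "$sum_before" == "$sum_after" ]] || echo "FTL dirty-shutdown recovery check failed" >&2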
00:29:46.005 [2024-12-10 11:38:07.944148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.005 [2024-12-10 11:38:07.944446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:46.005 [2024-12-10 11:38:07.944626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.490 ms 00:29:46.005 [2024-12-10 11:38:07.944810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.005 [2024-12-10 11:38:07.958367] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:46.005 [2024-12-10 11:38:07.961774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.005 [2024-12-10 11:38:07.961952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:46.005 [2024-12-10 11:38:07.962120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.657 ms 00:29:46.005 [2024-12-10 11:38:07.962294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.005 [2024-12-10 11:38:07.962586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.005 [2024-12-10 11:38:07.962777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:46.005 [2024-12-10 11:38:07.962947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:46.005 [2024-12-10 11:38:07.963135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.005 [2024-12-10 11:38:07.963433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.005 [2024-12-10 11:38:07.963583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:46.005 [2024-12-10 11:38:07.963776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:46.005 [2024-12-10 11:38:07.963955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.005 [2024-12-10 11:38:07.964064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.005 [2024-12-10 11:38:07.964109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:46.005 [2024-12-10 11:38:07.964136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:46.005 [2024-12-10 11:38:07.964160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.005 [2024-12-10 11:38:07.964254] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:46.005 [2024-12-10 11:38:07.964276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.005 [2024-12-10 11:38:07.964288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:46.005 [2024-12-10 11:38:07.964301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:46.005 [2024-12-10 11:38:07.964318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.005 [2024-12-10 11:38:07.997170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.005 [2024-12-10 11:38:07.997235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:46.005 [2024-12-10 11:38:07.997255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.801 ms 00:29:46.005 [2024-12-10 11:38:07.997267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.005 [2024-12-10 11:38:07.997368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.005 [2024-12-10 
11:38:07.997386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:29:46.005 [2024-12-10 11:38:07.997399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:29:46.005 [2024-12-10 11:38:07.997410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.005 [2024-12-10 11:38:07.998986] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 362.888 ms, result 0
00:29:46.942  [progress meter, 2024-12-10T11:38:10.045Z–11:38:52.107Z: Copying 24/1024 through 1020/1024 [MB] in ~23 MB steps at 22–24 MBps]
[2024-12-10T11:38:52.674Z] Copying: 1048272/1048576 [kB] (3696 kBps)
[2024-12-10T11:38:52.674Z] Copying: 1024/1024 [MB] (average 23 MBps)
[2024-12-10 11:38:52.427360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:30.507 [2024-12-10 11:38:52.427489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:30:30.507 [2024-12-10 11:38:52.427529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration:
0.004 ms 00:30:30.507 [2024-12-10 11:38:52.427541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.507 [2024-12-10 11:38:52.431320] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:30.507 [2024-12-10 11:38:52.438145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.507 [2024-12-10 11:38:52.438329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:30.507 [2024-12-10 11:38:52.438359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.757 ms 00:30:30.507 [2024-12-10 11:38:52.438380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.507 [2024-12-10 11:38:52.451056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.507 [2024-12-10 11:38:52.451118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:30.507 [2024-12-10 11:38:52.451154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.371 ms 00:30:30.507 [2024-12-10 11:38:52.451166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.507 [2024-12-10 11:38:52.473862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.507 [2024-12-10 11:38:52.473922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:30.507 [2024-12-10 11:38:52.473956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.673 ms 00:30:30.507 [2024-12-10 11:38:52.473967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.507 [2024-12-10 11:38:52.480764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.508 [2024-12-10 11:38:52.480794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:30.508 [2024-12-10 11:38:52.480823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.745 ms 00:30:30.508 [2024-12-10 11:38:52.480833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.508 [2024-12-10 11:38:52.512614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.508 [2024-12-10 11:38:52.512677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:30.508 [2024-12-10 11:38:52.512710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.747 ms 00:30:30.508 [2024-12-10 11:38:52.512720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.508 [2024-12-10 11:38:52.530307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.508 [2024-12-10 11:38:52.530554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:30.508 [2024-12-10 11:38:52.530581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.546 ms 00:30:30.508 [2024-12-10 11:38:52.530595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.508 [2024-12-10 11:38:52.650509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.508 [2024-12-10 11:38:52.650574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:30.508 [2024-12-10 11:38:52.650614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.857 ms 00:30:30.508 [2024-12-10 11:38:52.650625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.768 [2024-12-10 11:38:52.682574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.768 [2024-12-10 
11:38:52.682957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:30:30.768 [2024-12-10 11:38:52.682990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.895 ms
00:30:30.768 [2024-12-10 11:38:52.683034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:30.768 [2024-12-10 11:38:52.716831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:30.768 [2024-12-10 11:38:52.717101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:30:30.768 [2024-12-10 11:38:52.717129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.716 ms
00:30:30.768 [2024-12-10 11:38:52.717142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:30.768 [2024-12-10 11:38:52.749189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:30.768 [2024-12-10 11:38:52.749246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:30:30.768 [2024-12-10 11:38:52.749271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.001 ms
00:30:30.768 [2024-12-10 11:38:52.749282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:30.768 [2024-12-10 11:38:52.779892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:30.768 [2024-12-10 11:38:52.780108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:30:30.768 [2024-12-10 11:38:52.780136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.519 ms
00:30:30.768 [2024-12-10 11:38:52.780149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:30.768 [2024-12-10 11:38:52.780196] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:30:30.768 [2024-12-10 11:38:52.780229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open
00:30:30.768 [2024-12-10 11:38:52.780244 … 11:38:52.781545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 … Band 100: 0 / 261120 wr_cnt: 0 state: free (all 99 remaining bands identical)
00:30:30.769 [2024-12-10 11:38:52.781565] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:30:30.769 [2024-12-10 11:38:52.781576] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a48a3d2f-9343-46ec-b5be-0bdd53e2eb48
00:30:30.769 [2024-12-10 11:38:52.781601] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536
00:30:30.769 [2024-12-10 11:38:52.781612] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130496
00:30:30.769 [2024-12-10 11:38:52.781622] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536
00:30:30.769 [2024-12-10 11:38:52.781669] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074
00:30:30.769 [2024-12-10 11:38:52.781679] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:30:30.769 [2024-12-10 11:38:52.781689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:30:30.769 [2024-12-10 11:38:52.781700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:30:30.769 [2024-12-10 11:38:52.781709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:30:30.769 [2024-12-10 11:38:52.781718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:30:30.769 [2024-12-10 11:38:52.781729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:30.769 [2024-12-10 11:38:52.781741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:30:30.769 [2024-12-10 11:38:52.781751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.535 ms
00:30:30.769 [2024-12-10 11:38:52.781762]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.769 [2024-12-10 11:38:52.798477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.769 [2024-12-10 11:38:52.798525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:30.769 [2024-12-10 11:38:52.798555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.662 ms 00:30:30.769 [2024-12-10 11:38:52.798580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.769 [2024-12-10 11:38:52.799097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.769 [2024-12-10 11:38:52.799128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:30.769 [2024-12-10 11:38:52.799150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:30:30.769 [2024-12-10 11:38:52.799161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.769 [2024-12-10 11:38:52.839204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.769 [2024-12-10 11:38:52.839248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:30.769 [2024-12-10 11:38:52.839279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.769 [2024-12-10 11:38:52.839290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.769 [2024-12-10 11:38:52.839348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.769 [2024-12-10 11:38:52.839362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:30.769 [2024-12-10 11:38:52.839380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.769 [2024-12-10 11:38:52.839390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.769 [2024-12-10 11:38:52.839525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.769 [2024-12-10 11:38:52.839542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:30.769 [2024-12-10 11:38:52.839552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.769 [2024-12-10 11:38:52.839561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.769 [2024-12-10 11:38:52.839581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.769 [2024-12-10 11:38:52.839593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:30.769 [2024-12-10 11:38:52.839602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.769 [2024-12-10 11:38:52.839611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.769 [2024-12-10 11:38:52.922833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.769 [2024-12-10 11:38:52.922893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:30.769 [2024-12-10 11:38:52.922925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.769 [2024-12-10 11:38:52.922935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.028 [2024-12-10 11:38:52.992244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.028 [2024-12-10 11:38:52.992522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:31.028 [2024-12-10 11:38:52.992550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:30:31.028 [2024-12-10 11:38:52.992569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.028 [2024-12-10 11:38:52.992640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.028 [2024-12-10 11:38:52.992679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:31.028 [2024-12-10 11:38:52.992691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.028 [2024-12-10 11:38:52.992701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.028 [2024-12-10 11:38:52.992768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.028 [2024-12-10 11:38:52.992785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:31.028 [2024-12-10 11:38:52.992796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.028 [2024-12-10 11:38:52.992806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.028 [2024-12-10 11:38:52.992934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.028 [2024-12-10 11:38:52.992952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:31.028 [2024-12-10 11:38:52.992962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.028 [2024-12-10 11:38:52.992972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.028 [2024-12-10 11:38:52.993015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.028 [2024-12-10 11:38:52.993031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:31.028 [2024-12-10 11:38:52.993042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.028 [2024-12-10 11:38:52.993066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.028 [2024-12-10 11:38:52.993127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.028 [2024-12-10 11:38:52.993141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:31.028 [2024-12-10 11:38:52.993150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.028 [2024-12-10 11:38:52.993159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.028 [2024-12-10 11:38:52.993205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.028 [2024-12-10 11:38:52.993221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:31.028 [2024-12-10 11:38:52.993231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.028 [2024-12-10 11:38:52.993240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.028 [2024-12-10 11:38:52.993368] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 568.364 ms, result 0 00:30:32.402 00:30:32.402 00:30:32.402 11:38:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:34.301 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:34.301 [2024-12-10 11:38:56.205436] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 
initialization... 00:30:34.301 [2024-12-10 11:38:56.205599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82823 ] 00:30:34.301 [2024-12-10 11:38:56.393472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.560 [2024-12-10 11:38:56.512006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.820 [2024-12-10 11:38:56.791207] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:34.820 [2024-12-10 11:38:56.791281] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:34.820 [2024-12-10 11:38:56.947297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.820 [2024-12-10 11:38:56.947345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:34.820 [2024-12-10 11:38:56.947379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:34.820 [2024-12-10 11:38:56.947388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.820 [2024-12-10 11:38:56.947444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.820 [2024-12-10 11:38:56.947463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:34.820 [2024-12-10 11:38:56.947474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:34.820 [2024-12-10 11:38:56.947482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.820 [2024-12-10 11:38:56.947509] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:34.820 [2024-12-10 11:38:56.948461] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:34.820 [2024-12-10 11:38:56.948716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.820 [2024-12-10 11:38:56.948736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:34.820 [2024-12-10 11:38:56.948749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:30:34.820 [2024-12-10 11:38:56.948759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.820 [2024-12-10 11:38:56.949802] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:34.820 [2024-12-10 11:38:56.963166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.820 [2024-12-10 11:38:56.963204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:34.820 [2024-12-10 11:38:56.963235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.365 ms 00:30:34.820 [2024-12-10 11:38:56.963244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.820 [2024-12-10 11:38:56.963312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.820 [2024-12-10 11:38:56.963328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:34.820 [2024-12-10 11:38:56.963339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:30:34.820 [2024-12-10 11:38:56.963348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.820 [2024-12-10 11:38:56.967707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:34.820 [2024-12-10 11:38:56.967917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:34.820 [2024-12-10 11:38:56.967942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.277 ms 00:30:34.820 [2024-12-10 11:38:56.967960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.820 [2024-12-10 11:38:56.968045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.820 [2024-12-10 11:38:56.968088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:34.820 [2024-12-10 11:38:56.968101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:34.820 [2024-12-10 11:38:56.968111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.820 [2024-12-10 11:38:56.968166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.820 [2024-12-10 11:38:56.968182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:34.820 [2024-12-10 11:38:56.968194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:34.820 [2024-12-10 11:38:56.968205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.820 [2024-12-10 11:38:56.968256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:34.820 [2024-12-10 11:38:56.971931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.820 [2024-12-10 11:38:56.971962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:34.821 [2024-12-10 11:38:56.971995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.682 ms 00:30:34.821 [2024-12-10 11:38:56.972004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.821 [2024-12-10 11:38:56.972039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.821 [2024-12-10 11:38:56.972052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:34.821 [2024-12-10 11:38:56.972088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:34.821 [2024-12-10 11:38:56.972098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.821 [2024-12-10 11:38:56.972142] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:34.821 [2024-12-10 11:38:56.972171] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:34.821 [2024-12-10 11:38:56.972210] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:34.821 [2024-12-10 11:38:56.972232] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:34.821 [2024-12-10 11:38:56.972330] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:34.821 [2024-12-10 11:38:56.972344] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:34.821 [2024-12-10 11:38:56.972357] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:34.821 [2024-12-10 11:38:56.972370] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:34.821 [2024-12-10 11:38:56.972396] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:34.821 [2024-12-10 11:38:56.972421] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:34.821 [2024-12-10 11:38:56.972430] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:34.821 [2024-12-10 11:38:56.972442] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:34.821 [2024-12-10 11:38:56.972467] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:34.821 [2024-12-10 11:38:56.972477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.821 [2024-12-10 11:38:56.972486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:34.821 [2024-12-10 11:38:56.972495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:30:34.821 [2024-12-10 11:38:56.972504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.821 [2024-12-10 11:38:56.972586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.821 [2024-12-10 11:38:56.972599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:34.821 [2024-12-10 11:38:56.972609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:34.821 [2024-12-10 11:38:56.972618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.821 [2024-12-10 11:38:56.972746] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:34.821 [2024-12-10 11:38:56.972764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:34.821 [2024-12-10 11:38:56.972775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:34.821 [2024-12-10 11:38:56.972784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:34.821 [2024-12-10 11:38:56.972794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:34.821 [2024-12-10 11:38:56.972803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:34.821 [2024-12-10 11:38:56.972812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:34.821 [2024-12-10 11:38:56.972825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:34.821 [2024-12-10 11:38:56.972834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:34.821 [2024-12-10 11:38:56.972842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:34.821 [2024-12-10 11:38:56.972851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:34.821 [2024-12-10 11:38:56.972859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:34.821 [2024-12-10 11:38:56.972867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:34.821 [2024-12-10 11:38:56.972888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:34.821 [2024-12-10 11:38:56.972897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:34.821 [2024-12-10 11:38:56.972906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:34.821 [2024-12-10 11:38:56.972914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:34.821 [2024-12-10 11:38:56.972923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:34.821 [2024-12-10 11:38:56.972931] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:34.821 [2024-12-10 11:38:56.972940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:34.821 [2024-12-10 11:38:56.972948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:34.821 [2024-12-10 11:38:56.972957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:34.821 [2024-12-10 11:38:56.972965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:34.821 [2024-12-10 11:38:56.972973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:34.821 [2024-12-10 11:38:56.972981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:34.821 [2024-12-10 11:38:56.972990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:34.821 [2024-12-10 11:38:56.972998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:34.821 [2024-12-10 11:38:56.973006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:34.821 [2024-12-10 11:38:56.973014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:34.821 [2024-12-10 11:38:56.973023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:34.821 [2024-12-10 11:38:56.973032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:34.821 [2024-12-10 11:38:56.973040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:34.821 [2024-12-10 11:38:56.973049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:34.821 [2024-12-10 11:38:56.973057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:34.821 [2024-12-10 11:38:56.973081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:34.821 [2024-12-10 11:38:56.973089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:34.821 [2024-12-10 11:38:56.973096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:34.821 [2024-12-10 11:38:56.973105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:34.821 [2024-12-10 11:38:56.973113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:34.821 [2024-12-10 11:38:56.973122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:34.821 [2024-12-10 11:38:56.973131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:34.821 [2024-12-10 11:38:56.973139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:34.821 [2024-12-10 11:38:56.973147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:34.821 [2024-12-10 11:38:56.973156] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:34.821 [2024-12-10 11:38:56.973166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:34.821 [2024-12-10 11:38:56.973174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:34.821 [2024-12-10 11:38:56.973183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:34.821 [2024-12-10 11:38:56.973193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:34.821 [2024-12-10 11:38:56.973202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:34.821 [2024-12-10 11:38:56.973210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:34.821 
[2024-12-10 11:38:56.973218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:34.821 [2024-12-10 11:38:56.973226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:34.821 [2024-12-10 11:38:56.973234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:34.821 [2024-12-10 11:38:56.973244] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:34.821 [2024-12-10 11:38:56.973255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:34.821 [2024-12-10 11:38:56.973270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:34.821 [2024-12-10 11:38:56.973279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:34.821 [2024-12-10 11:38:56.973288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:34.821 [2024-12-10 11:38:56.973297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:34.821 [2024-12-10 11:38:56.973306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:34.821 [2024-12-10 11:38:56.973315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:34.821 [2024-12-10 11:38:56.973324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:34.821 [2024-12-10 11:38:56.973332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:34.821 [2024-12-10 11:38:56.973341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:34.821 [2024-12-10 11:38:56.973350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:34.821 [2024-12-10 11:38:56.973359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:34.821 [2024-12-10 11:38:56.973368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:34.821 [2024-12-10 11:38:56.973377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:34.821 [2024-12-10 11:38:56.973386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:34.821 [2024-12-10 11:38:56.973395] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:34.821 [2024-12-10 11:38:56.973405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:34.821 [2024-12-10 11:38:56.973416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:34.821 [2024-12-10 11:38:56.973425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:34.821 [2024-12-10 11:38:56.973434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:34.822 [2024-12-10 11:38:56.973443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:34.822 [2024-12-10 11:38:56.973453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.822 [2024-12-10 11:38:56.973462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:34.822 [2024-12-10 11:38:56.973472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:30:34.822 [2024-12-10 11:38:56.973480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.081 [2024-12-10 11:38:57.005189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.081 [2024-12-10 11:38:57.005244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:35.081 [2024-12-10 11:38:57.005279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.638 ms 00:30:35.081 [2024-12-10 11:38:57.005309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.081 [2024-12-10 11:38:57.005454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.081 [2024-12-10 11:38:57.005468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:35.081 [2024-12-10 11:38:57.005479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:30:35.081 [2024-12-10 11:38:57.005488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.081 [2024-12-10 11:38:57.045816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.081 [2024-12-10 11:38:57.045913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:35.081 [2024-12-10 11:38:57.045948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.238 ms 00:30:35.081 [2024-12-10 11:38:57.045960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.081 [2024-12-10 11:38:57.046023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.081 [2024-12-10 11:38:57.046053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:35.081 [2024-12-10 11:38:57.046072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:35.081 [2024-12-10 11:38:57.046098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.081 [2024-12-10 11:38:57.046507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.081 [2024-12-10 11:38:57.046540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:35.081 [2024-12-10 11:38:57.046551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:30:35.081 [2024-12-10 11:38:57.046561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.081 [2024-12-10 11:38:57.046731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.081 [2024-12-10 11:38:57.046750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:35.081 [2024-12-10 11:38:57.046824] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:30:35.081 [2024-12-10 11:38:57.046836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.081 [2024-12-10 11:38:57.062786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.081 [2024-12-10 11:38:57.062825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:35.081 [2024-12-10 11:38:57.062857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.891 ms 00:30:35.081 [2024-12-10 11:38:57.062867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.081 [2024-12-10 11:38:57.076803] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:35.081 [2024-12-10 11:38:57.076839] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:35.081 [2024-12-10 11:38:57.076872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.081 [2024-12-10 11:38:57.076882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:35.081 [2024-12-10 11:38:57.076892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.894 ms 00:30:35.081 [2024-12-10 11:38:57.076901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.081 [2024-12-10 11:38:57.103012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.081 [2024-12-10 11:38:57.103217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:35.082 [2024-12-10 11:38:57.103260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.069 ms 00:30:35.082 [2024-12-10 11:38:57.103272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.082 [2024-12-10 11:38:57.117731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.082 [2024-12-10 11:38:57.117767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:35.082 [2024-12-10 11:38:57.117796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.413 ms 00:30:35.082 [2024-12-10 11:38:57.117805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.082 [2024-12-10 11:38:57.131190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.082 [2024-12-10 11:38:57.131225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:35.082 [2024-12-10 11:38:57.131255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.346 ms 00:30:35.082 [2024-12-10 11:38:57.131264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.082 [2024-12-10 11:38:57.131978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.082 [2024-12-10 11:38:57.132039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:35.082 [2024-12-10 11:38:57.132084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:30:35.082 [2024-12-10 11:38:57.132096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.082 [2024-12-10 11:38:57.196888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.082 [2024-12-10 11:38:57.196955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:35.082 [2024-12-10 11:38:57.196997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.765 ms 00:30:35.082 [2024-12-10 11:38:57.197007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.082 [2024-12-10 11:38:57.208711] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:35.082 [2024-12-10 11:38:57.211181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.082 [2024-12-10 11:38:57.211215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:35.082 [2024-12-10 11:38:57.211232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.096 ms 00:30:35.082 [2024-12-10 11:38:57.211244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.082 [2024-12-10 11:38:57.211347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.082 [2024-12-10 11:38:57.211367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:35.082 [2024-12-10 11:38:57.211384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:35.082 [2024-12-10 11:38:57.211395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.082 [2024-12-10 11:38:57.213233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.082 [2024-12-10 11:38:57.213311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:35.082 [2024-12-10 11:38:57.213360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.779 ms 00:30:35.082 [2024-12-10 11:38:57.213491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.082 [2024-12-10 11:38:57.213564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.082 [2024-12-10 11:38:57.213772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:35.082 [2024-12-10 11:38:57.213829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:35.082 [2024-12-10 11:38:57.213868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.082 [2024-12-10 11:38:57.214029] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:35.082 [2024-12-10 11:38:57.214080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.082 [2024-12-10 11:38:57.214118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:35.082 [2024-12-10 11:38:57.214155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:35.082 [2024-12-10 11:38:57.214191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.341 [2024-12-10 11:38:57.247821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.341 [2024-12-10 11:38:57.248044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:35.341 [2024-12-10 11:38:57.248204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.575 ms 00:30:35.341 [2024-12-10 11:38:57.248335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.341 [2024-12-10 11:38:57.248471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.341 [2024-12-10 11:38:57.248561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:35.341 [2024-12-10 11:38:57.248725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:35.341 [2024-12-10 11:38:57.248847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
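Each FTL management step above is emitted by mngt/ftl_mngt.c as a group of trace_step records: Action, name, duration, status. A quick way to profile a sequence like this 'FTL startup' run is to pair each name with the duration that follows it. Below is a minimal awk sketch, assuming the raw console log with one record per line; the build.log file name is a placeholder, not something produced by this job:

    awk '
      # remember the step name from each trace_step "name:" record
      /trace_step/ && /name:/     { sub(/.*name: /, ""); step = $0 }
      # on the matching "duration:" record, print duration and name
      /trace_step/ && /duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                    printf "%10.3f ms  %s\n", $0, step }
    ' build.log | sort -rn | head

Run against this console output, it would rank Restore P2L checkpoints (64.765 ms) and Initialize NV cache (40.238 ms) as the slowest steps of this startup phase.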
00:30:35.341 [2024-12-10 11:38:57.253253] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.050 ms, result 0
00:30:36.720  [2024-12-10T11:38:59.453Z] Copying: 912/1048576 [kB] (912 kBps) [2024-12-10T11:39:36.445Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-10 11:39:36.398040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.278 [2024-12-10 11:39:36.398128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:31:14.278 [2024-12-10 11:39:36.398161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:31:14.278 [2024-12-10 11:39:36.398175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.278 [2024-12-10 11:39:36.398213] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:14.278 [2024-12-10 11:39:36.404574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.278 [2024-12-10 11:39:36.404625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:31:14.278 [2024-12-10 11:39:36.404666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.332 ms
00:31:14.278 [2024-12-10 11:39:36.404680] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.278 [2024-12-10 11:39:36.405026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.278 [2024-12-10 11:39:36.405064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:14.278 [2024-12-10 11:39:36.405080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:31:14.278 [2024-12-10 11:39:36.405095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.278 [2024-12-10 11:39:36.416166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.278 [2024-12-10 11:39:36.416211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:14.278 [2024-12-10 11:39:36.416230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.045 ms 00:31:14.278 [2024-12-10 11:39:36.416241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.278 [2024-12-10 11:39:36.421755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.278 [2024-12-10 11:39:36.421783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:14.278 [2024-12-10 11:39:36.421818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.474 ms 00:31:14.278 [2024-12-10 11:39:36.421828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.538 [2024-12-10 11:39:36.448307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.538 [2024-12-10 11:39:36.448563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:14.538 [2024-12-10 11:39:36.448589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.432 ms 00:31:14.538 [2024-12-10 11:39:36.448600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.538 [2024-12-10 11:39:36.463723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.538 [2024-12-10 11:39:36.463775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:14.538 [2024-12-10 11:39:36.463807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.047 ms 00:31:14.538 [2024-12-10 11:39:36.463817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.538 [2024-12-10 11:39:36.465860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.538 [2024-12-10 11:39:36.466040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:14.538 [2024-12-10 11:39:36.466088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.001 ms 00:31:14.538 [2024-12-10 11:39:36.466108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.538 [2024-12-10 11:39:36.492215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.538 [2024-12-10 11:39:36.492439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:14.538 [2024-12-10 11:39:36.492495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.081 ms 00:31:14.538 [2024-12-10 11:39:36.492506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.538 [2024-12-10 11:39:36.517946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.538 [2024-12-10 11:39:36.517981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:14.538 [2024-12-10 11:39:36.518011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.398 ms 
00:31:14.538 [2024-12-10 11:39:36.518020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.538 [2024-12-10 11:39:36.542929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.538 [2024-12-10 11:39:36.542963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:14.538 [2024-12-10 11:39:36.542992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.868 ms 00:31:14.538 [2024-12-10 11:39:36.543001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.538 [2024-12-10 11:39:36.567961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.538 [2024-12-10 11:39:36.568176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:14.538 [2024-12-10 11:39:36.568202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.900 ms 00:31:14.538 [2024-12-10 11:39:36.568214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.538 [2024-12-10 11:39:36.568257] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:14.538 [2024-12-10 11:39:36.568278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:14.538 [2024-12-10 11:39:36.568291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:14.538 [2024-12-10 11:39:36.568303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:14.538 [2024-12-10 11:39:36.568314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:14.538 [2024-12-10 11:39:36.568325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:14.538 [2024-12-10 11:39:36.568343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:14.538 [2024-12-10 11:39:36.568354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:14.538 [2024-12-10 11:39:36.568364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:14.538 [2024-12-10 11:39:36.568375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:14.538 [2024-12-10 11:39:36.568386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:14.538 [2024-12-10 11:39:36.568397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:14.538 [2024-12-10 11:39:36.568408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:14.538 [2024-12-10 11:39:36.568418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:14.538 [2024-12-10 11:39:36.568429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: 
free 00:31:14.539 [2024-12-10 11:39:36.568471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 
261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.568992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569259] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:14.539 [2024-12-10 11:39:36.569351] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:14.539 [2024-12-10 11:39:36.569361] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a48a3d2f-9343-46ec-b5be-0bdd53e2eb48 00:31:14.539 [2024-12-10 11:39:36.569371] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:14.539 [2024-12-10 11:39:36.569379] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135104 00:31:14.539 [2024-12-10 11:39:36.569392] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133120 00:31:14.539 [2024-12-10 11:39:36.569402] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:31:14.540 [2024-12-10 11:39:36.569410] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:14.540 [2024-12-10 11:39:36.569429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:14.540 [2024-12-10 11:39:36.569438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:14.540 [2024-12-10 11:39:36.569446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:14.540 [2024-12-10 11:39:36.569454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:14.540 [2024-12-10 11:39:36.569465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.540 [2024-12-10 11:39:36.569474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:14.540 [2024-12-10 11:39:36.569483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.209 ms 00:31:14.540 [2024-12-10 11:39:36.569492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.540 [2024-12-10 11:39:36.583164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.540 [2024-12-10 11:39:36.583337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:14.540 [2024-12-10 11:39:36.583361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.652 ms 00:31:14.540 [2024-12-10 11:39:36.583373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.540 [2024-12-10 11:39:36.583820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.540 [2024-12-10 11:39:36.583838] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:14.540 [2024-12-10 11:39:36.583850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:31:14.540 [2024-12-10 11:39:36.583859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.540 [2024-12-10 11:39:36.618011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.540 [2024-12-10 11:39:36.618050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:14.540 [2024-12-10 11:39:36.618081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.540 [2024-12-10 11:39:36.618090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.540 [2024-12-10 11:39:36.618139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.540 [2024-12-10 11:39:36.618151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:14.540 [2024-12-10 11:39:36.618161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.540 [2024-12-10 11:39:36.618169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.540 [2024-12-10 11:39:36.618259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.540 [2024-12-10 11:39:36.618276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:14.540 [2024-12-10 11:39:36.618286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.540 [2024-12-10 11:39:36.618296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.540 [2024-12-10 11:39:36.618314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.540 [2024-12-10 11:39:36.618325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:14.540 [2024-12-10 11:39:36.618334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.540 [2024-12-10 11:39:36.618342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.798 [2024-12-10 11:39:36.709723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.798 [2024-12-10 11:39:36.709819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:14.798 [2024-12-10 11:39:36.709854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.799 [2024-12-10 11:39:36.709865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.799 [2024-12-10 11:39:36.801564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.799 [2024-12-10 11:39:36.801820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:14.799 [2024-12-10 11:39:36.801850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.799 [2024-12-10 11:39:36.801864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.799 [2024-12-10 11:39:36.801942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.799 [2024-12-10 11:39:36.801967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:14.799 [2024-12-10 11:39:36.801978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.799 [2024-12-10 11:39:36.801989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.799 [2024-12-10 11:39:36.802062] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.799 [2024-12-10 11:39:36.802079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:14.799 [2024-12-10 11:39:36.802104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.799 [2024-12-10 11:39:36.802115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.799 [2024-12-10 11:39:36.802252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.799 [2024-12-10 11:39:36.802270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:14.799 [2024-12-10 11:39:36.802288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.799 [2024-12-10 11:39:36.802299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.799 [2024-12-10 11:39:36.802386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.799 [2024-12-10 11:39:36.802402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:14.799 [2024-12-10 11:39:36.802412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.799 [2024-12-10 11:39:36.802435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.799 [2024-12-10 11:39:36.802475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.799 [2024-12-10 11:39:36.802487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:14.799 [2024-12-10 11:39:36.802502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.799 [2024-12-10 11:39:36.802512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.799 [2024-12-10 11:39:36.802558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.799 [2024-12-10 11:39:36.802573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:14.799 [2024-12-10 11:39:36.802583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.799 [2024-12-10 11:39:36.802592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.799 [2024-12-10 11:39:36.802775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 404.711 ms, result 0 00:31:15.735 00:31:15.735 00:31:15.735 11:39:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:17.638 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:17.638 11:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:17.638 [2024-12-10 11:39:39.699312] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
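The two commands above are the heart of the dirty-shutdown check: md5sum -c verifies the data written before the unclean shutdown, and spdk_dd reads the remaining 262144 blocks back out of ftl0 so they can be checksummed as well. Restated as a stand-alone sketch, using only the paths and flags visible in the log (SPDK_DIR is shorthand introduced here, not a variable the test script itself defines):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # Verify the half written before the dirty shutdown:
    md5sum -c "$SPDK_DIR/test/ftl/testfile.md5"

    # Read the second 262144 blocks back from the FTL bdev; --skip offsets
    # the input side, --json restores the bdev stack saved for this device.
    "$SPDK_DIR/build/bin/spdk_dd" \
        --ib=ftl0 \
        --of="$SPDK_DIR/test/ftl/testfile2" \
        --count=262144 --skip=262144 \
        --json="$SPDK_DIR/test/ftl/config/ftl.json"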
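The --json argument points spdk_dd at the bdev configuration saved for this FTL instance, which is what drives the 'Open base bdev' / 'Using nvc0n1p0 as write buffer cache' steps that follow. The test's actual ftl.json is not reproduced in the log, so the sketch below only illustrates the general shape of such a config using SPDK's bdev_ftl_create method; the cache bdev name and device UUID are taken from earlier in this log, while the base bdev name is a placeholder and a real saved file would also carry the NVMe attach steps:

    # Illustrative only -- not the test's actual ftl.json.
    cat > ftl.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_ftl_create",
              "params": {
                "name": "ftl0",
                "base_bdev": "nvme0n1",
                "cache": "nvc0n1p0",
                "uuid": "a48a3d2f-9343-46ec-b5be-0bdd53e2eb48"
              }
            }
          ]
        }
      ]
    }
    EOF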
00:31:17.638 [2024-12-10 11:39:39.699478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83243 ] 00:31:17.897 [2024-12-10 11:39:39.884828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.897 [2024-12-10 11:39:40.015975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.156 [2024-12-10 11:39:40.316256] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:18.156 [2024-12-10 11:39:40.316341] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:18.415 [2024-12-10 11:39:40.474072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.415 [2024-12-10 11:39:40.474284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:18.415 [2024-12-10 11:39:40.474312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:18.415 [2024-12-10 11:39:40.474336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.415 [2024-12-10 11:39:40.474399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.415 [2024-12-10 11:39:40.474418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:18.415 [2024-12-10 11:39:40.474429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:18.415 [2024-12-10 11:39:40.474439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.415 [2024-12-10 11:39:40.474466] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:18.415 [2024-12-10 11:39:40.475283] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:18.415 [2024-12-10 11:39:40.475306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.415 [2024-12-10 11:39:40.475316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:18.415 [2024-12-10 11:39:40.475327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:31:18.415 [2024-12-10 11:39:40.475336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.415 [2024-12-10 11:39:40.476385] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:18.415 [2024-12-10 11:39:40.489125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.415 [2024-12-10 11:39:40.489295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:18.415 [2024-12-10 11:39:40.489320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.756 ms 00:31:18.415 [2024-12-10 11:39:40.489331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.415 [2024-12-10 11:39:40.489449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.415 [2024-12-10 11:39:40.489467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:18.415 [2024-12-10 11:39:40.489482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:18.415 [2024-12-10 11:39:40.489497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.415 [2024-12-10 11:39:40.494113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:18.415 [2024-12-10 11:39:40.494151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:18.415 [2024-12-10 11:39:40.494181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.518 ms 00:31:18.415 [2024-12-10 11:39:40.494201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.415 [2024-12-10 11:39:40.494333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.415 [2024-12-10 11:39:40.494352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:18.415 [2024-12-10 11:39:40.494363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:31:18.415 [2024-12-10 11:39:40.494372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.416 [2024-12-10 11:39:40.494442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.416 [2024-12-10 11:39:40.494457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:18.416 [2024-12-10 11:39:40.494468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:18.416 [2024-12-10 11:39:40.494477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.416 [2024-12-10 11:39:40.494513] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:18.416 [2024-12-10 11:39:40.498674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.416 [2024-12-10 11:39:40.498718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:18.416 [2024-12-10 11:39:40.498768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.168 ms 00:31:18.416 [2024-12-10 11:39:40.498778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.416 [2024-12-10 11:39:40.498813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.416 [2024-12-10 11:39:40.498826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:18.416 [2024-12-10 11:39:40.498837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:18.416 [2024-12-10 11:39:40.498846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.416 [2024-12-10 11:39:40.498884] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:18.416 [2024-12-10 11:39:40.498910] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:18.416 [2024-12-10 11:39:40.498946] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:18.416 [2024-12-10 11:39:40.498966] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:18.416 [2024-12-10 11:39:40.499070] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:18.416 [2024-12-10 11:39:40.499082] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:18.416 [2024-12-10 11:39:40.499094] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:18.416 [2024-12-10 11:39:40.499105] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:18.416 [2024-12-10 11:39:40.499115] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:18.416 [2024-12-10 11:39:40.499125] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:18.416 [2024-12-10 11:39:40.499134] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:18.416 [2024-12-10 11:39:40.499145] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:18.416 [2024-12-10 11:39:40.499154] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:18.416 [2024-12-10 11:39:40.499163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.416 [2024-12-10 11:39:40.499172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:18.416 [2024-12-10 11:39:40.499181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:31:18.416 [2024-12-10 11:39:40.499190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.416 [2024-12-10 11:39:40.499261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.416 [2024-12-10 11:39:40.499273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:18.416 [2024-12-10 11:39:40.499282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:18.416 [2024-12-10 11:39:40.499291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.416 [2024-12-10 11:39:40.499381] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:18.416 [2024-12-10 11:39:40.499395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:18.416 [2024-12-10 11:39:40.499404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:18.416 [2024-12-10 11:39:40.499413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:18.416 [2024-12-10 11:39:40.499431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:18.416 [2024-12-10 11:39:40.499448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:18.416 [2024-12-10 11:39:40.499457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:18.416 [2024-12-10 11:39:40.499473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:18.416 [2024-12-10 11:39:40.499481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:18.416 [2024-12-10 11:39:40.499489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:18.416 [2024-12-10 11:39:40.499510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:18.416 [2024-12-10 11:39:40.499519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:18.416 [2024-12-10 11:39:40.499527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:18.416 [2024-12-10 11:39:40.499544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:18.416 [2024-12-10 11:39:40.499552] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:18.416 [2024-12-10 11:39:40.499568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.416 [2024-12-10 11:39:40.499584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:18.416 [2024-12-10 11:39:40.499592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.416 [2024-12-10 11:39:40.499608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:18.416 [2024-12-10 11:39:40.499616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.416 [2024-12-10 11:39:40.499632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:18.416 [2024-12-10 11:39:40.499656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.416 [2024-12-10 11:39:40.499689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:18.416 [2024-12-10 11:39:40.499698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:18.416 [2024-12-10 11:39:40.499768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:18.416 [2024-12-10 11:39:40.499780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:18.416 [2024-12-10 11:39:40.499789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:18.416 [2024-12-10 11:39:40.499798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:18.416 [2024-12-10 11:39:40.499807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:18.416 [2024-12-10 11:39:40.499820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:18.416 [2024-12-10 11:39:40.499848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:18.416 [2024-12-10 11:39:40.499857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499866] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:18.416 [2024-12-10 11:39:40.499876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:18.416 [2024-12-10 11:39:40.499886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:18.416 [2024-12-10 11:39:40.499895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.416 [2024-12-10 11:39:40.499905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:18.416 [2024-12-10 11:39:40.499918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:18.416 [2024-12-10 11:39:40.499933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:18.416 
[2024-12-10 11:39:40.499950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:18.416 [2024-12-10 11:39:40.499961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:18.416 [2024-12-10 11:39:40.499971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:18.416 [2024-12-10 11:39:40.499981] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:18.416 [2024-12-10 11:39:40.500024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:18.416 [2024-12-10 11:39:40.500042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:18.416 [2024-12-10 11:39:40.500053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:18.416 [2024-12-10 11:39:40.500063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:18.416 [2024-12-10 11:39:40.500102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:18.416 [2024-12-10 11:39:40.500131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:18.416 [2024-12-10 11:39:40.500142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:18.416 [2024-12-10 11:39:40.500153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:18.416 [2024-12-10 11:39:40.500164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:18.416 [2024-12-10 11:39:40.500176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:18.416 [2024-12-10 11:39:40.500201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:18.416 [2024-12-10 11:39:40.500212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:18.416 [2024-12-10 11:39:40.500223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:18.416 [2024-12-10 11:39:40.500233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:18.417 [2024-12-10 11:39:40.500244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:18.417 [2024-12-10 11:39:40.500255] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:18.417 [2024-12-10 11:39:40.500267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:18.417 [2024-12-10 11:39:40.500280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:18.417 [2024-12-10 11:39:40.500291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:18.417 [2024-12-10 11:39:40.500304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:18.417 [2024-12-10 11:39:40.500315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:18.417 [2024-12-10 11:39:40.500327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.417 [2024-12-10 11:39:40.500338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:18.417 [2024-12-10 11:39:40.500351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:31:18.417 [2024-12-10 11:39:40.500362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.417 [2024-12-10 11:39:40.527365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.417 [2024-12-10 11:39:40.527416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:18.417 [2024-12-10 11:39:40.527433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.925 ms 00:31:18.417 [2024-12-10 11:39:40.527447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.417 [2024-12-10 11:39:40.527538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.417 [2024-12-10 11:39:40.527551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:18.417 [2024-12-10 11:39:40.527561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:18.417 [2024-12-10 11:39:40.527569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.417 [2024-12-10 11:39:40.574311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.417 [2024-12-10 11:39:40.574356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:18.417 [2024-12-10 11:39:40.574371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.615 ms 00:31:18.417 [2024-12-10 11:39:40.574381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.417 [2024-12-10 11:39:40.574426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.417 [2024-12-10 11:39:40.574440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:18.417 [2024-12-10 11:39:40.574456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:18.417 [2024-12-10 11:39:40.574464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.417 [2024-12-10 11:39:40.574876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.417 [2024-12-10 11:39:40.574895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:18.417 [2024-12-10 11:39:40.574906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:31:18.417 [2024-12-10 11:39:40.574915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.417 [2024-12-10 11:39:40.575069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.417 [2024-12-10 11:39:40.575101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:18.417 [2024-12-10 11:39:40.575129] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:31:18.417 [2024-12-10 11:39:40.575139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.676 [2024-12-10 11:39:40.589896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.676 [2024-12-10 11:39:40.590056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:18.676 [2024-12-10 11:39:40.590082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.733 ms 00:31:18.677 [2024-12-10 11:39:40.590093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.605288] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:18.677 [2024-12-10 11:39:40.605347] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:18.677 [2024-12-10 11:39:40.605396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.605423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:18.677 [2024-12-10 11:39:40.605435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.191 ms 00:31:18.677 [2024-12-10 11:39:40.605446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.630136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.630189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:18.677 [2024-12-10 11:39:40.630221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.617 ms 00:31:18.677 [2024-12-10 11:39:40.630231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.643563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.643599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:18.677 [2024-12-10 11:39:40.643629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.270 ms 00:31:18.677 [2024-12-10 11:39:40.643638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.656313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.656539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:18.677 [2024-12-10 11:39:40.656563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.606 ms 00:31:18.677 [2024-12-10 11:39:40.656574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.657321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.657361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:18.677 [2024-12-10 11:39:40.657393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:31:18.677 [2024-12-10 11:39:40.657403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.722835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.722890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:18.677 [2024-12-10 11:39:40.722930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.404 ms 00:31:18.677 [2024-12-10 11:39:40.722941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.733775] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:18.677 [2024-12-10 11:39:40.735933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.735965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:18.677 [2024-12-10 11:39:40.735996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.930 ms 00:31:18.677 [2024-12-10 11:39:40.736021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.736160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.736180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:18.677 [2024-12-10 11:39:40.736197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:18.677 [2024-12-10 11:39:40.736208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.736911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.736936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:18.677 [2024-12-10 11:39:40.736949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:31:18.677 [2024-12-10 11:39:40.736959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.736991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.737004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:18.677 [2024-12-10 11:39:40.737030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:18.677 [2024-12-10 11:39:40.737040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.737081] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:18.677 [2024-12-10 11:39:40.737096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.737106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:18.677 [2024-12-10 11:39:40.737116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:18.677 [2024-12-10 11:39:40.737125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.763891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.764099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:18.677 [2024-12-10 11:39:40.764135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.743 ms 00:31:18.677 [2024-12-10 11:39:40.764148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.677 [2024-12-10 11:39:40.764226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.677 [2024-12-10 11:39:40.764244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:18.677 [2024-12-10 11:39:40.764257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:18.677 [2024-12-10 11:39:40.764268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
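The trace above is the FTL management layer logging each startup step as an Action/name/duration/status quadruple (metadata init, NV cache and L2P restore, and so on), and it closes just below with a finish_msg giving the total 'FTL startup' time. When a startup looks slow, a per-step timing summary is handy. A minimal sketch, assuming the console output has been saved one record per line to ftl.log (a hypothetical filename):

    # Pair each "name:" record with the "duration:" record that follows it,
    # then list the five slowest management steps first.
    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print $0 "\t" name }' ftl.log |
        sort -rn | head -5

Run against the startup trace above, this puts Restore P2L checkpoints (65.404 ms) and Initialize NV cache (46.615 ms) at the top.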
00:31:18.677 [2024-12-10 11:39:40.765555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 290.896 ms, result 0 00:31:20.079  [2024-12-10T11:39:43.182Z] Copying: 23/1024 [MB] (23 MBps) [... 43 intermediate progress-meter updates (46/1024 through 1003/1024, at a steady 22 to 23 MBps) elided ...] [2024-12-10T11:40:26.179Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-10 11:40:25.995322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.012 [2024-12-10 11:40:25.995408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:04.012 [2024-12-10 11:40:25.995435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:04.012 [2024-12-10 11:40:25.995450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.012 [2024-12-10 11:40:25.995490] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:04.012 [2024-12-10 11:40:26.000256] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.012 [2024-12-10 11:40:26.000471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:04.012 [2024-12-10 11:40:26.000505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.738 ms 00:32:04.012 [2024-12-10 11:40:26.000521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.012 [2024-12-10 11:40:26.000857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.012 [2024-12-10 11:40:26.000883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:04.012 [2024-12-10 11:40:26.000899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:32:04.012 [2024-12-10 11:40:26.000914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.012 [2024-12-10 11:40:26.005048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.012 [2024-12-10 11:40:26.005078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:04.012 [2024-12-10 11:40:26.005090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.111 ms 00:32:04.012 [2024-12-10 11:40:26.005104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.012 [2024-12-10 11:40:26.010373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.012 [2024-12-10 11:40:26.010511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:04.012 [2024-12-10 11:40:26.010534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.250 ms 00:32:04.012 [2024-12-10 11:40:26.010546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.012 [2024-12-10 11:40:26.034835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.012 [2024-12-10 11:40:26.034872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:04.012 [2024-12-10 11:40:26.034903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.233 ms 00:32:04.012 [2024-12-10 11:40:26.034912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.012 [2024-12-10 11:40:26.049557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.012 [2024-12-10 11:40:26.049593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:04.012 [2024-12-10 11:40:26.049608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.621 ms 00:32:04.012 [2024-12-10 11:40:26.049618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.012 [2024-12-10 11:40:26.051390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.012 [2024-12-10 11:40:26.051539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:04.012 [2024-12-10 11:40:26.051564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:32:04.012 [2024-12-10 11:40:26.051575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.012 [2024-12-10 11:40:26.076547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.012 [2024-12-10 11:40:26.076583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:04.012 [2024-12-10 11:40:26.076597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.948 ms 00:32:04.012 [2024-12-10 11:40:26.076606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:04.012 [2024-12-10 11:40:26.100775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.012 [2024-12-10 11:40:26.100811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:04.012 [2024-12-10 11:40:26.100840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.146 ms 00:32:04.012 [2024-12-10 11:40:26.100849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.012 [2024-12-10 11:40:26.124517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.012 [2024-12-10 11:40:26.124553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:04.012 [2024-12-10 11:40:26.124567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.645 ms 00:32:04.012 [2024-12-10 11:40:26.124576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.012 [2024-12-10 11:40:26.148386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.012 [2024-12-10 11:40:26.148422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:04.012 [2024-12-10 11:40:26.148435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.767 ms 00:32:04.012 [2024-12-10 11:40:26.148444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.012 [2024-12-10 11:40:26.148465] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:04.012 [2024-12-10 11:40:26.148488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:04.012 [2024-12-10 11:40:26.148502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:32:04.012 [2024-12-10 11:40:26.148511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148620] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:04.012 [2024-12-10 11:40:26.148795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148957] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.148997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 
11:40:26.149270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 
00:32:04.013 [2024-12-10 11:40:26.149517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:04.013 [2024-12-10 11:40:26.149650] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:04.013 [2024-12-10 11:40:26.149661] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a48a3d2f-9343-46ec-b5be-0bdd53e2eb48 00:32:04.013 [2024-12-10 11:40:26.149671] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:04.013 [2024-12-10 11:40:26.149680] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:04.013 [2024-12-10 11:40:26.149690] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:04.013 [2024-12-10 11:40:26.149700] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:04.013 [2024-12-10 11:40:26.149732] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:04.013 [2024-12-10 11:40:26.149744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:04.013 [2024-12-10 11:40:26.149753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:04.013 [2024-12-10 11:40:26.149762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:04.013 [2024-12-10 11:40:26.149771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:04.013 [2024-12-10 11:40:26.149782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.013 [2024-12-10 11:40:26.149792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:04.013 [2024-12-10 11:40:26.149804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:32:04.013 [2024-12-10 11:40:26.149817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.013 [2024-12-10 11:40:26.162980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.013 [2024-12-10 11:40:26.163013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:04.013 [2024-12-10 11:40:26.163027] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.141 ms 00:32:04.013 [2024-12-10 11:40:26.163037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.014 [2024-12-10 11:40:26.163383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.014 [2024-12-10 11:40:26.163440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:04.014 [2024-12-10 11:40:26.163452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:32:04.014 [2024-12-10 11:40:26.163461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.197972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.273 [2024-12-10 11:40:26.198009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:04.273 [2024-12-10 11:40:26.198055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.273 [2024-12-10 11:40:26.198065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.198115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.273 [2024-12-10 11:40:26.198132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:04.273 [2024-12-10 11:40:26.198142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.273 [2024-12-10 11:40:26.198151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.198213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.273 [2024-12-10 11:40:26.198229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:04.273 [2024-12-10 11:40:26.198239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.273 [2024-12-10 11:40:26.198248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.198266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.273 [2024-12-10 11:40:26.198277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:04.273 [2024-12-10 11:40:26.198292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.273 [2024-12-10 11:40:26.198301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.276007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.273 [2024-12-10 11:40:26.276064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.273 [2024-12-10 11:40:26.276079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.273 [2024-12-10 11:40:26.276095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.340942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.273 [2024-12-10 11:40:26.340997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:04.273 [2024-12-10 11:40:26.341028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.273 [2024-12-10 11:40:26.341038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.341148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.273 [2024-12-10 11:40:26.341164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:32:04.273 [2024-12-10 11:40:26.341174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.273 [2024-12-10 11:40:26.341183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.341221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.273 [2024-12-10 11:40:26.341234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:04.273 [2024-12-10 11:40:26.341244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.273 [2024-12-10 11:40:26.341258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.341360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.273 [2024-12-10 11:40:26.341377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:04.273 [2024-12-10 11:40:26.341386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.273 [2024-12-10 11:40:26.341396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.341435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.273 [2024-12-10 11:40:26.341450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:04.273 [2024-12-10 11:40:26.341460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.273 [2024-12-10 11:40:26.341469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.341513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.273 [2024-12-10 11:40:26.341526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:04.273 [2024-12-10 11:40:26.341536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.273 [2024-12-10 11:40:26.341546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.341588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.273 [2024-12-10 11:40:26.341602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:04.273 [2024-12-10 11:40:26.341611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.273 [2024-12-10 11:40:26.341625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.273 [2024-12-10 11:40:26.342019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.686 ms, result 0 00:32:05.210 00:32:05.210 00:32:05.210 11:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:07.114 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:32:07.114 11:40:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:32:07.114 11:40:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:32:07.114 11:40:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:07.114 11:40:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:07.114 11:40:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:07.114 11:40:29 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:07.114 11:40:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:07.114 Process with pid 81317 is not found 00:32:07.114 11:40:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81317 00:32:07.114 11:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81317 ']' 00:32:07.114 11:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81317 00:32:07.114 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81317) - No such process 00:32:07.114 11:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81317 is not found' 00:32:07.114 11:40:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:32:07.373 Remove shared memory files 00:32:07.373 11:40:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:32:07.373 11:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:07.373 11:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:07.373 11:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:07.373 11:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:32:07.373 11:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:07.373 11:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:07.373 00:32:07.373 real 4m0.747s 00:32:07.373 user 4m42.082s 00:32:07.373 sys 0m35.298s 00:32:07.373 ************************************ 00:32:07.373 END TEST ftl_dirty_shutdown 00:32:07.373 ************************************ 00:32:07.373 11:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:07.374 11:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:07.374 11:40:29 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:07.374 11:40:29 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:07.374 11:40:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:07.374 11:40:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:07.633 ************************************ 00:32:07.633 START TEST ftl_upgrade_shutdown 00:32:07.633 ************************************ 00:32:07.633 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:07.633 * Looking for test storage... 
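The ftl_upgrade_shutdown run that starts here opens with the usual harness preamble: a test-storage probe, then an lcov version probe traced from scripts/common.sh just below, where `lt 1.15 2` (via cmp_versions) tokenizes both version strings on dots and compares them component by component to pick coverage flags the installed lcov understands. A minimal standalone sketch of that comparison, not the SPDK implementation verbatim:

    # Version-order "less than": split on "." and "-", compare numerically,
    # and treat missing components as 0 (so 1.15 < 2, but 2.0 is not < 2).
    lt() {
        local IFS=.- i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }

    lt 1.15 2 && echo "pre-2.0 lcov: keep --rc lcov_branch_coverage=1"

Here lcov 1.15 sorts before 2, so the branch/function coverage --rc options seen in the trace stay enabled.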
00:32:07.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:07.633 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:07.633 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:32:07.633 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:07.633 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:07.633 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:07.633 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:07.633 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:07.633 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:07.633 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:07.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.634 --rc genhtml_branch_coverage=1 00:32:07.634 --rc genhtml_function_coverage=1 00:32:07.634 --rc genhtml_legend=1 00:32:07.634 --rc geninfo_all_blocks=1 00:32:07.634 --rc geninfo_unexecuted_blocks=1 00:32:07.634 00:32:07.634 ' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:07.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.634 --rc genhtml_branch_coverage=1 00:32:07.634 --rc genhtml_function_coverage=1 00:32:07.634 --rc genhtml_legend=1 00:32:07.634 --rc geninfo_all_blocks=1 00:32:07.634 --rc geninfo_unexecuted_blocks=1 00:32:07.634 00:32:07.634 ' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:07.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.634 --rc genhtml_branch_coverage=1 00:32:07.634 --rc genhtml_function_coverage=1 00:32:07.634 --rc genhtml_legend=1 00:32:07.634 --rc geninfo_all_blocks=1 00:32:07.634 --rc geninfo_unexecuted_blocks=1 00:32:07.634 00:32:07.634 ' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:07.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.634 --rc genhtml_branch_coverage=1 00:32:07.634 --rc genhtml_function_coverage=1 00:32:07.634 --rc genhtml_legend=1 00:32:07.634 --rc geninfo_all_blocks=1 00:32:07.634 --rc geninfo_unexecuted_blocks=1 00:32:07.634 00:32:07.634 ' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:07.634 11:40:29 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83795 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83795 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83795 ']' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:07.634 11:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:07.894 [2024-12-10 11:40:29.853887] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
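Once spdk_tgt (pid 83795 here) is up and waitforlisten returns, tcp_target_setup assembles the FTL bdev entirely through rpc.py: attach the base NVMe controller, clear any stale lvstore, create a fresh lvstore and a thin-provisioned lvol on it, attach the cache controller, split off a write-buffer partition, and bind the pieces together with bdev_ftl_create. Condensed to just the RPC calls traced below (the UUID placeholders stand in for values returned by the intermediate calls; sizes come from the FTL_BASE_SIZE/FTL_CACHE_SIZE settings above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # base device, exposed as basen1
    $rpc bdev_lvol_create_lvstore basen1 lvs                           # lvstore on the base bdev
    $rpc bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>              # 20480 MiB thin lvol
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # cache device, exposed as cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                           # one 5120 MiB split, cachen1p0
    $rpc -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2

The get_bdev_size checks in between use bdev_get_bdevs plus jq to turn block counts into MiB; for basen1, 1310720 blocks of 4096 bytes gives 1310720 * 4096 / 1048576 = 5120 MiB, the bdev_size the trace reports.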
00:32:07.894 [2024-12-10 11:40:29.854063] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83795 ] 00:32:07.894 [2024-12-10 11:40:30.037799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.152 [2024-12-10 11:40:30.161438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:08.721 11:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:08.980 11:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:08.980 11:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:08.980 11:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:08.980 11:40:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:32:08.980 11:40:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:08.980 11:40:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:08.980 11:40:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:32:08.980 11:40:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:09.239 11:40:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:09.239 { 00:32:09.239 "name": "basen1", 00:32:09.239 "aliases": [ 00:32:09.239 "88888737-19c5-4b42-a350-a46d2f4b909d" 00:32:09.239 ], 00:32:09.239 "product_name": "NVMe disk", 00:32:09.239 "block_size": 4096, 00:32:09.239 "num_blocks": 1310720, 00:32:09.239 "uuid": "88888737-19c5-4b42-a350-a46d2f4b909d", 00:32:09.239 "numa_id": -1, 00:32:09.239 "assigned_rate_limits": { 00:32:09.239 "rw_ios_per_sec": 0, 00:32:09.239 "rw_mbytes_per_sec": 0, 00:32:09.239 "r_mbytes_per_sec": 0, 00:32:09.239 "w_mbytes_per_sec": 0 00:32:09.239 }, 00:32:09.239 "claimed": true, 00:32:09.239 "claim_type": "read_many_write_one", 00:32:09.239 "zoned": false, 00:32:09.239 "supported_io_types": { 00:32:09.239 "read": true, 00:32:09.239 "write": true, 00:32:09.239 "unmap": true, 00:32:09.239 "flush": true, 00:32:09.239 "reset": true, 00:32:09.240 "nvme_admin": true, 00:32:09.240 "nvme_io": true, 00:32:09.240 "nvme_io_md": false, 00:32:09.240 "write_zeroes": true, 00:32:09.240 "zcopy": false, 00:32:09.240 "get_zone_info": false, 00:32:09.240 "zone_management": false, 00:32:09.240 "zone_append": false, 00:32:09.240 "compare": true, 00:32:09.240 "compare_and_write": false, 00:32:09.240 "abort": true, 00:32:09.240 "seek_hole": false, 00:32:09.240 "seek_data": false, 00:32:09.240 "copy": true, 00:32:09.240 "nvme_iov_md": false 00:32:09.240 }, 00:32:09.240 "driver_specific": { 00:32:09.240 "nvme": [ 00:32:09.240 { 00:32:09.240 "pci_address": "0000:00:11.0", 00:32:09.240 "trid": { 00:32:09.240 "trtype": "PCIe", 00:32:09.240 "traddr": "0000:00:11.0" 00:32:09.240 }, 00:32:09.240 "ctrlr_data": { 00:32:09.240 "cntlid": 0, 00:32:09.240 "vendor_id": "0x1b36", 00:32:09.240 "model_number": "QEMU NVMe Ctrl", 00:32:09.240 "serial_number": "12341", 00:32:09.240 "firmware_revision": "8.0.0", 00:32:09.240 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:09.240 "oacs": { 00:32:09.240 "security": 0, 00:32:09.240 "format": 1, 00:32:09.240 "firmware": 0, 00:32:09.240 "ns_manage": 1 00:32:09.240 }, 00:32:09.240 "multi_ctrlr": false, 00:32:09.240 "ana_reporting": false 00:32:09.240 }, 00:32:09.240 "vs": { 00:32:09.240 "nvme_version": "1.4" 00:32:09.240 }, 00:32:09.240 "ns_data": { 00:32:09.240 "id": 1, 00:32:09.240 "can_share": false 00:32:09.240 } 00:32:09.240 } 00:32:09.240 ], 00:32:09.240 "mp_policy": "active_passive" 00:32:09.240 } 00:32:09.240 } 00:32:09.240 ]' 00:32:09.240 11:40:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:09.240 11:40:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:09.240 11:40:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:09.499 11:40:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:09.499 11:40:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:09.499 11:40:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:09.499 11:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:09.499 11:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:09.499 11:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:09.499 11:40:31 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:09.499 11:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:09.758 11:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=faa5eff3-2ebf-4896-8525-94373363f2ae 00:32:09.758 11:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:09.758 11:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u faa5eff3-2ebf-4896-8525-94373363f2ae 00:32:10.016 11:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:10.276 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=9deaed7a-3edf-4d08-924c-5878d0de9987 00:32:10.276 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 9deaed7a-3edf-4d08-924c-5878d0de9987 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=d205ddfa-a9cc-4486-9079-544881d5a530 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z d205ddfa-a9cc-4486-9079-544881d5a530 ]] 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 d205ddfa-a9cc-4486-9079-544881d5a530 5120 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=d205ddfa-a9cc-4486-9079-544881d5a530 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size d205ddfa-a9cc-4486-9079-544881d5a530 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d205ddfa-a9cc-4486-9079-544881d5a530 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:10.535 11:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d205ddfa-a9cc-4486-9079-544881d5a530 00:32:10.794 11:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:10.794 { 00:32:10.794 "name": "d205ddfa-a9cc-4486-9079-544881d5a530", 00:32:10.794 "aliases": [ 00:32:10.794 "lvs/basen1p0" 00:32:10.794 ], 00:32:10.794 "product_name": "Logical Volume", 00:32:10.794 "block_size": 4096, 00:32:10.794 "num_blocks": 5242880, 00:32:10.794 "uuid": "d205ddfa-a9cc-4486-9079-544881d5a530", 00:32:10.794 "assigned_rate_limits": { 00:32:10.794 "rw_ios_per_sec": 0, 00:32:10.794 "rw_mbytes_per_sec": 0, 00:32:10.794 "r_mbytes_per_sec": 0, 00:32:10.794 "w_mbytes_per_sec": 0 00:32:10.794 }, 00:32:10.794 "claimed": false, 00:32:10.794 "zoned": false, 00:32:10.794 "supported_io_types": { 00:32:10.794 "read": true, 00:32:10.794 "write": true, 00:32:10.794 "unmap": true, 00:32:10.794 "flush": false, 00:32:10.794 "reset": true, 00:32:10.794 "nvme_admin": false, 00:32:10.794 "nvme_io": false, 00:32:10.794 "nvme_io_md": false, 00:32:10.794 "write_zeroes": 
true, 00:32:10.794 "zcopy": false, 00:32:10.794 "get_zone_info": false, 00:32:10.794 "zone_management": false, 00:32:10.794 "zone_append": false, 00:32:10.794 "compare": false, 00:32:10.794 "compare_and_write": false, 00:32:10.794 "abort": false, 00:32:10.794 "seek_hole": true, 00:32:10.794 "seek_data": true, 00:32:10.794 "copy": false, 00:32:10.794 "nvme_iov_md": false 00:32:10.794 }, 00:32:10.794 "driver_specific": { 00:32:10.794 "lvol": { 00:32:10.794 "lvol_store_uuid": "9deaed7a-3edf-4d08-924c-5878d0de9987", 00:32:10.794 "base_bdev": "basen1", 00:32:10.794 "thin_provision": true, 00:32:10.794 "num_allocated_clusters": 0, 00:32:10.794 "snapshot": false, 00:32:10.794 "clone": false, 00:32:10.794 "esnap_clone": false 00:32:10.794 } 00:32:10.794 } 00:32:10.794 } 00:32:10.794 ]' 00:32:10.794 11:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:10.794 11:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:10.794 11:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:10.794 11:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:32:10.794 11:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:32:10.794 11:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:32:10.794 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:32:10.794 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:10.794 11:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:32:11.362 11:40:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:32:11.362 11:40:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:32:11.363 11:40:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:32:11.622 11:40:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:32:11.622 11:40:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:32:11.622 11:40:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d d205ddfa-a9cc-4486-9079-544881d5a530 -c cachen1p0 --l2p_dram_limit 2 00:32:11.622 [2024-12-10 11:40:33.732596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.622 [2024-12-10 11:40:33.732834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:11.622 [2024-12-10 11:40:33.732963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:11.622 [2024-12-10 11:40:33.733113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.622 [2024-12-10 11:40:33.733254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.622 [2024-12-10 11:40:33.733353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:11.622 [2024-12-10 11:40:33.733472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:32:11.622 [2024-12-10 11:40:33.733518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.622 [2024-12-10 11:40:33.733580] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:11.622 [2024-12-10 
11:40:33.734498] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:11.622 [2024-12-10 11:40:33.734550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.622 [2024-12-10 11:40:33.734563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:11.622 [2024-12-10 11:40:33.734577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.961 ms 00:32:11.623 [2024-12-10 11:40:33.734588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.623 [2024-12-10 11:40:33.734704] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 331a79b2-e7ae-4e9f-9886-4cb6065a80ab 00:32:11.623 [2024-12-10 11:40:33.735650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.623 [2024-12-10 11:40:33.735690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:11.623 [2024-12-10 11:40:33.735706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:32:11.623 [2024-12-10 11:40:33.735734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.623 [2024-12-10 11:40:33.739695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.623 [2024-12-10 11:40:33.739739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:11.623 [2024-12-10 11:40:33.739754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.913 ms 00:32:11.623 [2024-12-10 11:40:33.739765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.623 [2024-12-10 11:40:33.739830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.623 [2024-12-10 11:40:33.739849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:11.623 [2024-12-10 11:40:33.739860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:32:11.623 [2024-12-10 11:40:33.739874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.623 [2024-12-10 11:40:33.739937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.623 [2024-12-10 11:40:33.739958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:11.623 [2024-12-10 11:40:33.739971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:11.623 [2024-12-10 11:40:33.739982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.623 [2024-12-10 11:40:33.740010] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:11.623 [2024-12-10 11:40:33.744035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.623 [2024-12-10 11:40:33.744255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:11.623 [2024-12-10 11:40:33.744381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.030 ms 00:32:11.623 [2024-12-10 11:40:33.744558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.623 [2024-12-10 11:40:33.744652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.623 [2024-12-10 11:40:33.744788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:11.623 [2024-12-10 11:40:33.744901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:11.623 [2024-12-10 11:40:33.744950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:11.623 [2024-12-10 11:40:33.745047] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:11.623 [2024-12-10 11:40:33.745224] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:11.623 [2024-12-10 11:40:33.745431] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:11.623 [2024-12-10 11:40:33.745564] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:11.623 [2024-12-10 11:40:33.745594] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:11.623 [2024-12-10 11:40:33.745607] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:11.623 [2024-12-10 11:40:33.745621] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:11.623 [2024-12-10 11:40:33.745680] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:11.623 [2024-12-10 11:40:33.745700] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:11.623 [2024-12-10 11:40:33.745711] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:11.623 [2024-12-10 11:40:33.745726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.623 [2024-12-10 11:40:33.745737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:11.623 [2024-12-10 11:40:33.745751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.683 ms 00:32:11.623 [2024-12-10 11:40:33.745762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.623 [2024-12-10 11:40:33.745871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.623 [2024-12-10 11:40:33.745896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:11.623 [2024-12-10 11:40:33.745911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:32:11.623 [2024-12-10 11:40:33.745922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.623 [2024-12-10 11:40:33.746023] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:11.623 [2024-12-10 11:40:33.746066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:11.623 [2024-12-10 11:40:33.746079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:11.623 [2024-12-10 11:40:33.746089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.623 [2024-12-10 11:40:33.746101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:11.623 [2024-12-10 11:40:33.746110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:11.623 [2024-12-10 11:40:33.746121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:11.623 [2024-12-10 11:40:33.746130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:11.623 [2024-12-10 11:40:33.746141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:11.623 [2024-12-10 11:40:33.746151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.623 [2024-12-10 11:40:33.746161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:11.623 [2024-12-10 11:40:33.746171] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:11.623 [2024-12-10 11:40:33.746183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.623 [2024-12-10 11:40:33.746193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:11.623 [2024-12-10 11:40:33.746204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:11.623 [2024-12-10 11:40:33.746213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.623 [2024-12-10 11:40:33.746226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:11.623 [2024-12-10 11:40:33.746235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:11.623 [2024-12-10 11:40:33.746246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.623 [2024-12-10 11:40:33.746256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:11.623 [2024-12-10 11:40:33.746267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:11.623 [2024-12-10 11:40:33.746276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:11.623 [2024-12-10 11:40:33.746288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:11.623 [2024-12-10 11:40:33.746297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:11.623 [2024-12-10 11:40:33.746308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:11.623 [2024-12-10 11:40:33.746317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:11.623 [2024-12-10 11:40:33.746328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:11.623 [2024-12-10 11:40:33.746338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:11.623 [2024-12-10 11:40:33.746349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:11.623 [2024-12-10 11:40:33.746359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:11.623 [2024-12-10 11:40:33.746370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:11.623 [2024-12-10 11:40:33.746379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:11.623 [2024-12-10 11:40:33.746392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:11.623 [2024-12-10 11:40:33.746402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.623 [2024-12-10 11:40:33.746413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:11.623 [2024-12-10 11:40:33.746422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:11.623 [2024-12-10 11:40:33.746433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.623 [2024-12-10 11:40:33.746442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:11.623 [2024-12-10 11:40:33.746455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:11.623 [2024-12-10 11:40:33.746464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.623 [2024-12-10 11:40:33.746475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:11.623 [2024-12-10 11:40:33.746484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:11.623 [2024-12-10 11:40:33.746495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.623 [2024-12-10 11:40:33.746504] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:11.623 [2024-12-10 11:40:33.746516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:11.623 [2024-12-10 11:40:33.746526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:11.623 [2024-12-10 11:40:33.746538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.623 [2024-12-10 11:40:33.746558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:11.623 [2024-12-10 11:40:33.746573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:11.623 [2024-12-10 11:40:33.746583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:11.623 [2024-12-10 11:40:33.746594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:11.623 [2024-12-10 11:40:33.746603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:11.623 [2024-12-10 11:40:33.746614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:11.623 [2024-12-10 11:40:33.746626] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:11.623 [2024-12-10 11:40:33.746658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:11.623 [2024-12-10 11:40:33.746670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:11.623 [2024-12-10 11:40:33.746682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:11.623 [2024-12-10 11:40:33.746693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:11.623 [2024-12-10 11:40:33.746704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:11.623 [2024-12-10 11:40:33.746714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:11.623 [2024-12-10 11:40:33.746727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:11.624 [2024-12-10 11:40:33.746737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:11.624 [2024-12-10 11:40:33.746749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:11.624 [2024-12-10 11:40:33.746759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:11.624 [2024-12-10 11:40:33.746774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:11.624 [2024-12-10 11:40:33.746785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:11.624 [2024-12-10 11:40:33.746796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:11.624 [2024-12-10 11:40:33.746806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:11.624 [2024-12-10 11:40:33.746818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:11.624 [2024-12-10 11:40:33.746828] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:11.624 [2024-12-10 11:40:33.746841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:11.624 [2024-12-10 11:40:33.746852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:11.624 [2024-12-10 11:40:33.746864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:11.624 [2024-12-10 11:40:33.746874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:11.624 [2024-12-10 11:40:33.746886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:11.624 [2024-12-10 11:40:33.746897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.624 [2024-12-10 11:40:33.746909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:11.624 [2024-12-10 11:40:33.746920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.939 ms 00:32:11.624 [2024-12-10 11:40:33.746932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.624 [2024-12-10 11:40:33.746977] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
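As a quick cross-check of the layout dump above (worked arithmetic, not part of the trace): the 20480.00 MiB base and 5120.00 MiB NV cache capacities match the lvol and cache-split sizes created earlier, and the reported 3774873 L2P entries at 4 bytes apiece ("L2P address size: 4") fit the 14.50 MiB l2p region:

    # 3774873 entries * 4 B each, rounded up to whole 4096 B blocks:
    echo $(( (3774873 * 4 + 4095) / 4096 ))   # -> 3687 blocks, ~14.40 MiB
    # The dumped l2p region reserves 14.50 MiB (3712 blocks): the exact
    # requirement plus a little padding, so the two figures are consistent.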
00:32:11.624 [2024-12-10 11:40:33.746996] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:14.157 [2024-12-10 11:40:36.042710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.157 [2024-12-10 11:40:36.042967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:14.157 [2024-12-10 11:40:36.043098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2295.747 ms 00:32:14.157 [2024-12-10 11:40:36.043153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.157 [2024-12-10 11:40:36.069704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.157 [2024-12-10 11:40:36.069958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:14.157 [2024-12-10 11:40:36.070103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.204 ms 00:32:14.157 [2024-12-10 11:40:36.070156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.157 [2024-12-10 11:40:36.070385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.157 [2024-12-10 11:40:36.070539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:14.157 [2024-12-10 11:40:36.070692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:14.157 [2024-12-10 11:40:36.070757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.157 [2024-12-10 11:40:36.103789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.157 [2024-12-10 11:40:36.104008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:14.157 [2024-12-10 11:40:36.104164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.872 ms 00:32:14.157 [2024-12-10 11:40:36.104320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.157 [2024-12-10 11:40:36.104405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.157 [2024-12-10 11:40:36.104482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:14.157 [2024-12-10 11:40:36.104581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:14.157 [2024-12-10 11:40:36.104649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.157 [2024-12-10 11:40:36.105227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.157 [2024-12-10 11:40:36.105403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:14.157 [2024-12-10 11:40:36.105532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.301 ms 00:32:14.157 [2024-12-10 11:40:36.105584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.157 [2024-12-10 11:40:36.105743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.157 [2024-12-10 11:40:36.105852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:14.157 [2024-12-10 11:40:36.105981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:32:14.157 [2024-12-10 11:40:36.106052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.157 [2024-12-10 11:40:36.121447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.157 [2024-12-10 11:40:36.121660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:14.157 [2024-12-10 11:40:36.121781] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.249 ms 00:32:14.157 [2024-12-10 11:40:36.121833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.157 [2024-12-10 11:40:36.142556] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:14.157 [2024-12-10 11:40:36.143604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.157 [2024-12-10 11:40:36.143774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:14.157 [2024-12-10 11:40:36.143951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.585 ms 00:32:14.157 [2024-12-10 11:40:36.143976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.157 [2024-12-10 11:40:36.168231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.157 [2024-12-10 11:40:36.168449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:14.157 [2024-12-10 11:40:36.168592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.211 ms 00:32:14.157 [2024-12-10 11:40:36.168735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.157 [2024-12-10 11:40:36.168881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.157 [2024-12-10 11:40:36.169009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:14.158 [2024-12-10 11:40:36.169148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:32:14.158 [2024-12-10 11:40:36.169200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.158 [2024-12-10 11:40:36.198429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.158 [2024-12-10 11:40:36.198623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:14.158 [2024-12-10 11:40:36.198689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.132 ms 00:32:14.158 [2024-12-10 11:40:36.198706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.158 [2024-12-10 11:40:36.224968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.158 [2024-12-10 11:40:36.225003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:14.158 [2024-12-10 11:40:36.225020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.203 ms 00:32:14.158 [2024-12-10 11:40:36.225030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.158 [2024-12-10 11:40:36.225589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.158 [2024-12-10 11:40:36.225610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:14.158 [2024-12-10 11:40:36.225625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.518 ms 00:32:14.158 [2024-12-10 11:40:36.225664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.158 [2024-12-10 11:40:36.296715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.158 [2024-12-10 11:40:36.296762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:14.158 [2024-12-10 11:40:36.296784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 70.972 ms 00:32:14.158 [2024-12-10 11:40:36.296794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.158 [2024-12-10 11:40:36.322205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:14.158 [2024-12-10 11:40:36.322243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:14.158 [2024-12-10 11:40:36.322283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.304 ms 00:32:14.158 [2024-12-10 11:40:36.322309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.417 [2024-12-10 11:40:36.355571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.417 [2024-12-10 11:40:36.355608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:14.417 [2024-12-10 11:40:36.355650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.191 ms 00:32:14.417 [2024-12-10 11:40:36.355663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.417 [2024-12-10 11:40:36.380239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.417 [2024-12-10 11:40:36.380275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:14.417 [2024-12-10 11:40:36.380309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.532 ms 00:32:14.417 [2024-12-10 11:40:36.380320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.417 [2024-12-10 11:40:36.380369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.417 [2024-12-10 11:40:36.380387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:14.417 [2024-12-10 11:40:36.380402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:14.417 [2024-12-10 11:40:36.380427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.417 [2024-12-10 11:40:36.380510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.417 [2024-12-10 11:40:36.380527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:14.417 [2024-12-10 11:40:36.380540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:32:14.417 [2024-12-10 11:40:36.380550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.417 [2024-12-10 11:40:36.381644] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2648.489 ms, result 0 00:32:14.417 { 00:32:14.417 "name": "ftl", 00:32:14.417 "uuid": "331a79b2-e7ae-4e9f-9886-4cb6065a80ab" 00:32:14.417 } 00:32:14.417 11:40:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:14.676 [2024-12-10 11:40:36.696931] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.676 11:40:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:14.936 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:15.195 [2024-12-10 11:40:37.261501] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:15.195 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:15.454 [2024-12-10 11:40:37.494813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:15.454 11:40:37 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:15.713 Fill FTL, iteration 1 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:15.713 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83912 00:32:15.714 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:15.714 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:15.714 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83912 /var/tmp/spdk.tgt.sock 00:32:15.714 11:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83912 ']' 00:32:15.714 11:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:15.714 11:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.714 11:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:15.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:15.714 11:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.714 11:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:15.973 [2024-12-10 11:40:37.981127] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
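For orientation, the tcp_initiator_setup sequence unfolding in this trace reduces to the sketch below. The commands are lifted from the trace itself; the redirect target for the saved config is inferred from the --json argument the later spdk_dd runs pass, so treat that detail as an assumption:

    # Start a second SPDK app to act as the NVMe/TCP initiator, on its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock
    # Attach the FTL bdev exported by the target over TCP; it surfaces here as ftln1.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
        bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2018-09.io.spdk:cnode0
    # Snapshot the bdev subsystem config so spdk_dd can replay it without a live target.
    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
            save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json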
00:32:15.973 [2024-12-10 11:40:37.982158] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83912 ] 00:32:16.232 [2024-12-10 11:40:38.163881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.232 [2024-12-10 11:40:38.288941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.800 11:40:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.800 11:40:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:16.800 11:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:17.368 ftln1 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83912 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83912 ']' 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83912 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83912 00:32:17.369 killing process with pid 83912 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83912' 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83912 00:32:17.369 11:40:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83912 00:32:19.274 11:40:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:19.274 11:40:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:19.274 [2024-12-10 11:40:41.227623] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
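The fill pass itself is the single spdk_dd invocation shown in the trace, replaying that saved config: 1024 blocks of 1 MiB from /dev/urandom into ftln1 at queue depth 2, starting at output block 0 (the Copying: progress lines that follow are its output):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 \
        --bs=1048576 --count=1024 --qd=2 --seek=0   # 1 GiB total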
00:32:19.274 [2024-12-10 11:40:41.227812] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83954 ] 00:32:19.274 [2024-12-10 11:40:41.407487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.533 [2024-12-10 11:40:41.493469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.911  [2024-12-10T11:40:44.014Z] Copying: 213/1024 [MB] (213 MBps) [2024-12-10T11:40:44.952Z] Copying: 429/1024 [MB] (216 MBps) [2024-12-10T11:40:45.891Z] Copying: 646/1024 [MB] (217 MBps) [2024-12-10T11:40:46.828Z] Copying: 863/1024 [MB] (217 MBps) [2024-12-10T11:40:47.777Z] Copying: 1024/1024 [MB] (average 214 MBps) 00:32:25.610 00:32:25.610 Calculate MD5 checksum, iteration 1 00:32:25.610 11:40:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:32:25.610 11:40:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:32:25.610 11:40:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:25.610 11:40:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:25.610 11:40:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:25.610 11:40:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:25.610 11:40:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:25.610 11:40:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:25.610 [2024-12-10 11:40:47.515893] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
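The verification pass mirrors the fill: read the same 1 GiB back out of ftln1 into a scratch file, then hash it (the digest lands in sums[0] a few lines below):

    # Read back the region written at seek=0, using --ib/--of and --skip instead:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '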
00:32:25.610 [2024-12-10 11:40:47.516295] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84024 ] 00:32:25.610 [2024-12-10 11:40:47.673730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.610 [2024-12-10 11:40:47.758665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.024  [2024-12-10T11:40:50.127Z] Copying: 466/1024 [MB] (466 MBps) [2024-12-10T11:40:50.386Z] Copying: 927/1024 [MB] (461 MBps) [2024-12-10T11:40:51.322Z] Copying: 1024/1024 [MB] (average 464 MBps) 00:32:29.155 00:32:29.155 11:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:32:29.155 11:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:31.061 11:40:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:31.061 Fill FTL, iteration 2 00:32:31.061 11:40:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=be33a52bda08543997642f443c309776 00:32:31.061 11:40:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:31.061 11:40:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:31.061 11:40:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:32:31.061 11:40:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:31.061 11:40:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:31.061 11:40:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:31.061 11:40:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:31.061 11:40:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:31.061 11:40:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:31.061 [2024-12-10 11:40:52.930062] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
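Putting the two iterations together, the driver loop in upgrade_shutdown.sh plausibly looks like the sketch below; the variable names (seek, skip, iterations, sums) and the 1024 MiB stride come straight from the trace, but the exact loop body is an assumption:

    # Hedged reconstruction of the fill/verify loop; tcp_dd is the helper from
    # ftl/common.sh seen above, wrapping the spdk_dd invocations.
    seek=0; skip=0; iterations=2; sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek="$seek"
        seek=$(( seek + 1024 ))        # next fill starts 1024 MiB further into ftln1
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
               --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        skip=$(( skip + 1024 ))
        sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    done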
00:32:31.061 [2024-12-10 11:40:52.930201] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84082 ] 00:32:31.061 [2024-12-10 11:40:53.100774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.061 [2024-12-10 11:40:53.216540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.439  [2024-12-10T11:40:55.983Z] Copying: 219/1024 [MB] (219 MBps) [2024-12-10T11:40:56.920Z] Copying: 432/1024 [MB] (213 MBps) [2024-12-10T11:40:57.858Z] Copying: 641/1024 [MB] (209 MBps) [2024-12-10T11:40:58.426Z] Copying: 853/1024 [MB] (212 MBps) [2024-12-10T11:40:59.364Z] Copying: 1024/1024 [MB] (average 212 MBps) 00:32:37.197 00:32:37.197 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:32:37.197 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:32:37.197 Calculate MD5 checksum, iteration 2 00:32:37.197 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:37.197 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:37.197 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:37.197 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:37.197 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:37.197 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:37.197 [2024-12-10 11:40:59.344807] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
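Once the second digest is recorded, the remainder of this trace exercises the FTL property RPCs. Stripped of the trace noise, the sequence is (commands and the jq filter taken from the trace below):

    # Enable verbose properties, then count the NV cache chunks actually in use
    # (the filter is the one at upgrade_shutdown.sh@63; it returns 3 at this point):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[]
               | select(.utilization != 0.0)] | length'
    # Arm the shutdown-time upgrade preparation before the target is killed:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl \
        -p prep_upgrade_on_shutdown -v true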
00:32:37.197 [2024-12-10 11:40:59.344976] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84146 ] 00:32:37.456 [2024-12-10 11:40:59.522847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.456 [2024-12-10 11:40:59.603285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.363  [2024-12-10T11:41:02.467Z] Copying: 456/1024 [MB] (456 MBps) [2024-12-10T11:41:02.467Z] Copying: 913/1024 [MB] (457 MBps) [2024-12-10T11:41:03.404Z] Copying: 1024/1024 [MB] (average 457 MBps) 00:32:41.237 00:32:41.237 11:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:32:41.238 11:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:43.142 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:43.142 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=3d0e46877f4a1a626ddaaccac6fed02d 00:32:43.142 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:43.142 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:43.142 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:43.401 [2024-12-10 11:41:05.363140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.401 [2024-12-10 11:41:05.363203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:43.401 [2024-12-10 11:41:05.363238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:43.401 [2024-12-10 11:41:05.363248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.401 [2024-12-10 11:41:05.363281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.401 [2024-12-10 11:41:05.363300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:43.401 [2024-12-10 11:41:05.363319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:43.401 [2024-12-10 11:41:05.363329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.401 [2024-12-10 11:41:05.363380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.401 [2024-12-10 11:41:05.363396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:43.401 [2024-12-10 11:41:05.363407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:43.401 [2024-12-10 11:41:05.363417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.401 [2024-12-10 11:41:05.363539] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.350 ms, result 0 00:32:43.401 true 00:32:43.401 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:43.660 { 00:32:43.660 "name": "ftl", 00:32:43.660 "properties": [ 00:32:43.660 { 00:32:43.660 "name": "superblock_version", 00:32:43.660 "value": 5, 00:32:43.660 "read-only": true 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "name": "base_device", 00:32:43.660 "bands": [ 00:32:43.660 { 00:32:43.660 "id": 
0, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 1, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 2, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 3, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 4, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 5, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 6, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 7, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 8, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 9, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 10, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 11, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 12, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 13, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 14, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 15, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 16, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 17, 00:32:43.660 "state": "FREE", 00:32:43.660 "validity": 0.0 00:32:43.660 } 00:32:43.660 ], 00:32:43.660 "read-only": true 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "name": "cache_device", 00:32:43.660 "type": "bdev", 00:32:43.660 "chunks": [ 00:32:43.660 { 00:32:43.660 "id": 0, 00:32:43.660 "state": "INACTIVE", 00:32:43.660 "utilization": 0.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 1, 00:32:43.660 "state": "CLOSED", 00:32:43.660 "utilization": 1.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 2, 00:32:43.660 "state": "CLOSED", 00:32:43.660 "utilization": 1.0 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 3, 00:32:43.660 "state": "OPEN", 00:32:43.660 "utilization": 0.001953125 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "id": 4, 00:32:43.660 "state": "OPEN", 00:32:43.660 "utilization": 0.0 00:32:43.660 } 00:32:43.660 ], 00:32:43.660 "read-only": true 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "name": "verbose_mode", 00:32:43.660 "value": true, 00:32:43.660 "unit": "", 00:32:43.660 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:43.660 }, 00:32:43.660 { 00:32:43.660 "name": "prep_upgrade_on_shutdown", 00:32:43.660 "value": false, 00:32:43.660 "unit": "", 00:32:43.660 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:43.660 } 00:32:43.660 ] 00:32:43.660 } 00:32:43.660 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:32:43.920 [2024-12-10 11:41:05.867795] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.920 [2024-12-10 11:41:05.867844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:43.920 [2024-12-10 11:41:05.867862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:43.920 [2024-12-10 11:41:05.867872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.920 [2024-12-10 11:41:05.867905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.920 [2024-12-10 11:41:05.867921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:43.920 [2024-12-10 11:41:05.867932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:43.920 [2024-12-10 11:41:05.867942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.920 [2024-12-10 11:41:05.868000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.920 [2024-12-10 11:41:05.868028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:43.920 [2024-12-10 11:41:05.868050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:43.920 [2024-12-10 11:41:05.868060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.920 [2024-12-10 11:41:05.868185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.330 ms, result 0 00:32:43.920 true 00:32:43.920 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:32:43.920 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:43.920 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:44.179 11:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:32:44.179 11:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:32:44.179 11:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:44.438 [2024-12-10 11:41:06.368162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.438 [2024-12-10 11:41:06.368226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:44.438 [2024-12-10 11:41:06.368244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:44.438 [2024-12-10 11:41:06.368255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.438 [2024-12-10 11:41:06.368287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.438 [2024-12-10 11:41:06.368301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:44.438 [2024-12-10 11:41:06.368313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:44.438 [2024-12-10 11:41:06.368323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.438 [2024-12-10 11:41:06.368347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.438 [2024-12-10 11:41:06.368359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:44.438 [2024-12-10 11:41:06.368369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:44.438 [2024-12-10 
11:41:06.368379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.439 [2024-12-10 11:41:06.368475] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.312 ms, result 0 00:32:44.439 true 00:32:44.439 11:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:44.439 { 00:32:44.439 "name": "ftl", 00:32:44.439 "properties": [ 00:32:44.439 { 00:32:44.439 "name": "superblock_version", 00:32:44.439 "value": 5, 00:32:44.439 "read-only": true 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "name": "base_device", 00:32:44.439 "bands": [ 00:32:44.439 { 00:32:44.439 "id": 0, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 1, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 2, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 3, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 4, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 5, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 6, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 7, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 8, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 9, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 10, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 11, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 12, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 13, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 14, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 15, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 16, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 17, 00:32:44.439 "state": "FREE", 00:32:44.439 "validity": 0.0 00:32:44.439 } 00:32:44.439 ], 00:32:44.439 "read-only": true 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "name": "cache_device", 00:32:44.439 "type": "bdev", 00:32:44.439 "chunks": [ 00:32:44.439 { 00:32:44.439 "id": 0, 00:32:44.439 "state": "INACTIVE", 00:32:44.439 "utilization": 0.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 1, 00:32:44.439 "state": "CLOSED", 00:32:44.439 "utilization": 1.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 2, 00:32:44.439 "state": "CLOSED", 00:32:44.439 "utilization": 1.0 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 3, 00:32:44.439 "state": "OPEN", 00:32:44.439 "utilization": 0.001953125 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "id": 4, 00:32:44.439 "state": "OPEN", 00:32:44.439 "utilization": 0.0 00:32:44.439 } 00:32:44.439 ], 00:32:44.439 "read-only": true 00:32:44.439 
}, 00:32:44.439 { 00:32:44.439 "name": "verbose_mode", 00:32:44.439 "value": true, 00:32:44.439 "unit": "", 00:32:44.439 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:44.439 }, 00:32:44.439 { 00:32:44.439 "name": "prep_upgrade_on_shutdown", 00:32:44.439 "value": true, 00:32:44.439 "unit": "", 00:32:44.439 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:44.439 } 00:32:44.439 ] 00:32:44.439 } 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83795 ]] 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83795 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83795 ']' 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83795 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83795 00:32:44.699 killing process with pid 83795 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83795' 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83795 00:32:44.699 11:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83795 00:32:45.267 [2024-12-10 11:41:07.384279] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:45.267 [2024-12-10 11:41:07.400030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:45.267 [2024-12-10 11:41:07.400073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:45.267 [2024-12-10 11:41:07.400090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:45.267 [2024-12-10 11:41:07.400108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:45.267 [2024-12-10 11:41:07.400153] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:45.267 [2024-12-10 11:41:07.402827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:45.267 [2024-12-10 11:41:07.402855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:45.267 [2024-12-10 11:41:07.402867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.654 ms 00:32:45.267 [2024-12-10 11:41:07.402882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.388 [2024-12-10 11:41:15.468095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.388 [2024-12-10 11:41:15.468180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:53.388 [2024-12-10 11:41:15.468200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8065.241 ms 00:32:53.388 [2024-12-10 11:41:15.468217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.388 [2024-12-10 
11:41:15.469523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.388 [2024-12-10 11:41:15.469574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:53.388 [2024-12-10 11:41:15.469589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.283 ms 00:32:53.388 [2024-12-10 11:41:15.469600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.388 [2024-12-10 11:41:15.470824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.388 [2024-12-10 11:41:15.470869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:53.388 [2024-12-10 11:41:15.470884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.145 ms 00:32:53.388 [2024-12-10 11:41:15.470901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.388 [2024-12-10 11:41:15.481564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.388 [2024-12-10 11:41:15.481616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:53.388 [2024-12-10 11:41:15.481640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.623 ms 00:32:53.388 [2024-12-10 11:41:15.481651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.388 [2024-12-10 11:41:15.488789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.388 [2024-12-10 11:41:15.488830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:53.388 [2024-12-10 11:41:15.488845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.100 ms 00:32:53.388 [2024-12-10 11:41:15.488855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.388 [2024-12-10 11:41:15.488943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.388 [2024-12-10 11:41:15.488961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:53.388 [2024-12-10 11:41:15.488979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:32:53.388 [2024-12-10 11:41:15.488989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.388 [2024-12-10 11:41:15.499584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.388 [2024-12-10 11:41:15.499641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:53.388 [2024-12-10 11:41:15.499656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.576 ms 00:32:53.388 [2024-12-10 11:41:15.499667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.388 [2024-12-10 11:41:15.510149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.388 [2024-12-10 11:41:15.510197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:53.388 [2024-12-10 11:41:15.510210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.445 ms 00:32:53.388 [2024-12-10 11:41:15.510219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.388 [2024-12-10 11:41:15.520551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.388 [2024-12-10 11:41:15.520586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:53.389 [2024-12-10 11:41:15.520599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.295 ms 00:32:53.389 [2024-12-10 11:41:15.520608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:32:53.389 [2024-12-10 11:41:15.530830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.389 [2024-12-10 11:41:15.530878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:53.389 [2024-12-10 11:41:15.530892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.022 ms 00:32:53.389 [2024-12-10 11:41:15.530901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.389 [2024-12-10 11:41:15.530936] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:53.389 [2024-12-10 11:41:15.530970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:53.389 [2024-12-10 11:41:15.530983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:53.389 [2024-12-10 11:41:15.530993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:53.389 [2024-12-10 11:41:15.531003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:53.389 [2024-12-10 11:41:15.531180] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:53.389 [2024-12-10 11:41:15.531206] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 331a79b2-e7ae-4e9f-9886-4cb6065a80ab 00:32:53.389 [2024-12-10 11:41:15.531217] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:53.389 [2024-12-10 
11:41:15.531227] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:32:53.389 [2024-12-10 11:41:15.531236] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:32:53.389 [2024-12-10 11:41:15.531247] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:32:53.389 [2024-12-10 11:41:15.531257] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:53.389 [2024-12-10 11:41:15.531272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:53.389 [2024-12-10 11:41:15.531282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:53.389 [2024-12-10 11:41:15.531291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:53.389 [2024-12-10 11:41:15.531300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:53.389 [2024-12-10 11:41:15.531315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.389 [2024-12-10 11:41:15.531330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:53.389 [2024-12-10 11:41:15.531341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.380 ms 00:32:53.389 [2024-12-10 11:41:15.531351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.389 [2024-12-10 11:41:15.544885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.389 [2024-12-10 11:41:15.544935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:53.389 [2024-12-10 11:41:15.544949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.513 ms 00:32:53.389 [2024-12-10 11:41:15.544966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.389 [2024-12-10 11:41:15.545384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.389 [2024-12-10 11:41:15.545407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:53.389 [2024-12-10 11:41:15.545420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.393 ms 00:32:53.389 [2024-12-10 11:41:15.545430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.657 [2024-12-10 11:41:15.590459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.658 [2024-12-10 11:41:15.590501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:53.658 [2024-12-10 11:41:15.590522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.658 [2024-12-10 11:41:15.590532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.658 [2024-12-10 11:41:15.590569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.658 [2024-12-10 11:41:15.590582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:53.658 [2024-12-10 11:41:15.590592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.658 [2024-12-10 11:41:15.590601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.658 [2024-12-10 11:41:15.590721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.658 [2024-12-10 11:41:15.590740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:53.658 [2024-12-10 11:41:15.590750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.658 [2024-12-10 11:41:15.590765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:32:53.658 [2024-12-10 11:41:15.590786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.658 [2024-12-10 11:41:15.590798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:53.658 [2024-12-10 11:41:15.590823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.658 [2024-12-10 11:41:15.590847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.658 [2024-12-10 11:41:15.669327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.658 [2024-12-10 11:41:15.669381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:53.658 [2024-12-10 11:41:15.669397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.658 [2024-12-10 11:41:15.669429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.658 [2024-12-10 11:41:15.734686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.658 [2024-12-10 11:41:15.734746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:53.658 [2024-12-10 11:41:15.734761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.658 [2024-12-10 11:41:15.734771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.658 [2024-12-10 11:41:15.734861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.658 [2024-12-10 11:41:15.734878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:53.658 [2024-12-10 11:41:15.734888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.658 [2024-12-10 11:41:15.734897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.658 [2024-12-10 11:41:15.735001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.658 [2024-12-10 11:41:15.735033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:53.658 [2024-12-10 11:41:15.735060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.658 [2024-12-10 11:41:15.735070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.658 [2024-12-10 11:41:15.735202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.658 [2024-12-10 11:41:15.735221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:53.658 [2024-12-10 11:41:15.735234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.658 [2024-12-10 11:41:15.735244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.658 [2024-12-10 11:41:15.735295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.658 [2024-12-10 11:41:15.735318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:53.658 [2024-12-10 11:41:15.735330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.658 [2024-12-10 11:41:15.735340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.658 [2024-12-10 11:41:15.735383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.658 [2024-12-10 11:41:15.735398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:53.658 [2024-12-10 11:41:15.735409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.658 [2024-12-10 11:41:15.735419] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.658 [2024-12-10 11:41:15.735472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.658 [2024-12-10 11:41:15.735488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:53.658 [2024-12-10 11:41:15.735499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.658 [2024-12-10 11:41:15.735509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.658 [2024-12-10 11:41:15.735640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8335.631 ms, result 0 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84350 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84350 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84350 ']' 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.993 11:41:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:56.993 [2024-12-10 11:41:18.627660] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
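The trace above is the relaunch half of the test: after the clean 'FTL shutdown' completes, tcp_target_setup starts a fresh spdk_tgt from the tgt.json written by the previous instance, and waitforlisten polls until the new process (pid 84350) answers on /var/tmp/spdk.sock; its startup banner and DPDK EAL parameters follow below. A minimal sketch of that pattern, with paths taken from the trace and the retry loop assumed (the real helper lives in autotest_common.sh and is not reproduced in this log):

    #!/usr/bin/env bash
    # Sketch: relaunch the SPDK target from the config saved at shutdown,
    # then wait for its RPC socket to come up.
    SPDK=/home/vagrant/spdk_repo/spdk
    tcp_target_setup() {
        "$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' \
            --config="$SPDK/test/ftl/config/tgt.json" &
        spdk_tgt_pid=$!
        export spdk_tgt_pid
        local i
        for ((i = 0; i < 100; i++)); do
            # rpc_get_methods only succeeds once the target listens on the socket.
            if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1  # the target never came up
    }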
00:32:56.993 [2024-12-10 11:41:18.628351] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84350 ] 00:32:56.993 [2024-12-10 11:41:18.805930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.993 [2024-12-10 11:41:18.889030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.562 [2024-12-10 11:41:19.605094] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:57.562 [2024-12-10 11:41:19.605180] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:57.821 [2024-12-10 11:41:19.750864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.821 [2024-12-10 11:41:19.750927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:57.821 [2024-12-10 11:41:19.750961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:57.821 [2024-12-10 11:41:19.750971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.821 [2024-12-10 11:41:19.751053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.821 [2024-12-10 11:41:19.751071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:57.822 [2024-12-10 11:41:19.751081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:32:57.822 [2024-12-10 11:41:19.751091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.822 [2024-12-10 11:41:19.751128] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:57.822 [2024-12-10 11:41:19.751982] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:57.822 [2024-12-10 11:41:19.752031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.822 [2024-12-10 11:41:19.752043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:57.822 [2024-12-10 11:41:19.752053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.917 ms 00:32:57.822 [2024-12-10 11:41:19.752063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.822 [2024-12-10 11:41:19.753284] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:57.822 [2024-12-10 11:41:19.766539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.822 [2024-12-10 11:41:19.766577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:57.822 [2024-12-10 11:41:19.766614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.256 ms 00:32:57.822 [2024-12-10 11:41:19.766623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.822 [2024-12-10 11:41:19.766715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.822 [2024-12-10 11:41:19.766735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:57.822 [2024-12-10 11:41:19.766746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:32:57.822 [2024-12-10 11:41:19.766755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.822 [2024-12-10 11:41:19.771052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.822 [2024-12-10 
11:41:19.771085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:57.822 [2024-12-10 11:41:19.771114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.182 ms 00:32:57.822 [2024-12-10 11:41:19.771123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.822 [2024-12-10 11:41:19.771202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.822 [2024-12-10 11:41:19.771221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:57.822 [2024-12-10 11:41:19.771231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:32:57.822 [2024-12-10 11:41:19.771240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.822 [2024-12-10 11:41:19.771290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.822 [2024-12-10 11:41:19.771310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:57.822 [2024-12-10 11:41:19.771336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:57.822 [2024-12-10 11:41:19.771361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.822 [2024-12-10 11:41:19.771393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:57.822 [2024-12-10 11:41:19.775202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.822 [2024-12-10 11:41:19.775251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:57.822 [2024-12-10 11:41:19.775265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.816 ms 00:32:57.822 [2024-12-10 11:41:19.775280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.822 [2024-12-10 11:41:19.775318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.822 [2024-12-10 11:41:19.775333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:57.822 [2024-12-10 11:41:19.775344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:57.822 [2024-12-10 11:41:19.775354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.822 [2024-12-10 11:41:19.775398] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:57.822 [2024-12-10 11:41:19.775430] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:57.822 [2024-12-10 11:41:19.775482] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:57.822 [2024-12-10 11:41:19.775501] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:57.822 [2024-12-10 11:41:19.775597] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:57.822 [2024-12-10 11:41:19.775611] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:57.822 [2024-12-10 11:41:19.775624] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:57.822 [2024-12-10 11:41:19.775637] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:57.822 [2024-12-10 11:41:19.775665] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:32:57.822 [2024-12-10 11:41:19.775697] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:57.822 [2024-12-10 11:41:19.775707] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:57.822 [2024-12-10 11:41:19.775716] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:57.822 [2024-12-10 11:41:19.775726] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:57.822 [2024-12-10 11:41:19.775737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.822 [2024-12-10 11:41:19.775747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:57.822 [2024-12-10 11:41:19.775757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.343 ms 00:32:57.822 [2024-12-10 11:41:19.775767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.822 [2024-12-10 11:41:19.775853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.822 [2024-12-10 11:41:19.775867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:57.822 [2024-12-10 11:41:19.775883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:32:57.822 [2024-12-10 11:41:19.775892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.822 [2024-12-10 11:41:19.775998] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:57.822 [2024-12-10 11:41:19.776014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:57.822 [2024-12-10 11:41:19.776025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:57.822 [2024-12-10 11:41:19.776036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.822 [2024-12-10 11:41:19.776046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:57.822 [2024-12-10 11:41:19.776067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:57.822 [2024-12-10 11:41:19.776076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:57.822 [2024-12-10 11:41:19.776089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:57.822 [2024-12-10 11:41:19.776098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:57.822 [2024-12-10 11:41:19.776146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.822 [2024-12-10 11:41:19.776156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:57.822 [2024-12-10 11:41:19.776166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:57.822 [2024-12-10 11:41:19.776174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.822 [2024-12-10 11:41:19.776184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:57.822 [2024-12-10 11:41:19.776193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:57.822 [2024-12-10 11:41:19.776202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.822 [2024-12-10 11:41:19.776211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:57.822 [2024-12-10 11:41:19.776221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:57.822 [2024-12-10 11:41:19.776230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.822 [2024-12-10 11:41:19.776239] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:57.822 [2024-12-10 11:41:19.776248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:57.822 [2024-12-10 11:41:19.776257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:57.822 [2024-12-10 11:41:19.776267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:57.822 [2024-12-10 11:41:19.776290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:57.822 [2024-12-10 11:41:19.776300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:57.822 [2024-12-10 11:41:19.776310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:57.822 [2024-12-10 11:41:19.776319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:57.822 [2024-12-10 11:41:19.776328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:57.822 [2024-12-10 11:41:19.776338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:57.822 [2024-12-10 11:41:19.776348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:57.822 [2024-12-10 11:41:19.776357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:57.822 [2024-12-10 11:41:19.776366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:57.822 [2024-12-10 11:41:19.776375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:57.822 [2024-12-10 11:41:19.776384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.822 [2024-12-10 11:41:19.776393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:57.822 [2024-12-10 11:41:19.776403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:57.822 [2024-12-10 11:41:19.776412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.822 [2024-12-10 11:41:19.776436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:57.822 [2024-12-10 11:41:19.776445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:57.822 [2024-12-10 11:41:19.776458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.822 [2024-12-10 11:41:19.776467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:57.822 [2024-12-10 11:41:19.776476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:57.822 [2024-12-10 11:41:19.776485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.822 [2024-12-10 11:41:19.776493] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:57.822 [2024-12-10 11:41:19.776518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:57.822 [2024-12-10 11:41:19.776528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:57.822 [2024-12-10 11:41:19.776537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.822 [2024-12-10 11:41:19.776552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:57.823 [2024-12-10 11:41:19.776561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:57.823 [2024-12-10 11:41:19.776570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:57.823 [2024-12-10 11:41:19.776579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:57.823 [2024-12-10 11:41:19.776588] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:57.823 [2024-12-10 11:41:19.776597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:57.823 [2024-12-10 11:41:19.776607] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:57.823 [2024-12-10 11:41:19.776619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:57.823 [2024-12-10 11:41:19.776629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:57.823 [2024-12-10 11:41:19.776639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:57.823 [2024-12-10 11:41:19.776648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:57.823 [2024-12-10 11:41:19.776657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:57.823 [2024-12-10 11:41:19.776666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:57.823 [2024-12-10 11:41:19.776675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:57.823 [2024-12-10 11:41:19.776711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:57.823 [2024-12-10 11:41:19.776722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:57.823 [2024-12-10 11:41:19.776732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:57.823 [2024-12-10 11:41:19.776741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:57.823 [2024-12-10 11:41:19.776750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:57.823 [2024-12-10 11:41:19.776760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:57.823 [2024-12-10 11:41:19.776769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:57.823 [2024-12-10 11:41:19.776779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:57.823 [2024-12-10 11:41:19.776788] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:57.823 [2024-12-10 11:41:19.776798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:57.823 [2024-12-10 11:41:19.776811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:57.823 [2024-12-10 11:41:19.776822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:57.823 [2024-12-10 11:41:19.776831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:57.823 [2024-12-10 11:41:19.776840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:57.823 [2024-12-10 11:41:19.776851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.823 [2024-12-10 11:41:19.776860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:57.823 [2024-12-10 11:41:19.776870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.915 ms 00:32:57.823 [2024-12-10 11:41:19.776879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.823 [2024-12-10 11:41:19.776953] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:32:57.823 [2024-12-10 11:41:19.776977] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:00.355 [2024-12-10 11:41:21.932590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.355 [2024-12-10 11:41:21.932700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:00.355 [2024-12-10 11:41:21.932722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2155.651 ms 00:33:00.355 [2024-12-10 11:41:21.932733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.355 [2024-12-10 11:41:21.959584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.355 [2024-12-10 11:41:21.959665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:00.355 [2024-12-10 11:41:21.959701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.605 ms 00:33:00.355 [2024-12-10 11:41:21.959711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.355 [2024-12-10 11:41:21.959849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.355 [2024-12-10 11:41:21.959875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:00.355 [2024-12-10 11:41:21.959887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:33:00.355 [2024-12-10 11:41:21.959896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.355 [2024-12-10 11:41:21.992872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.355 [2024-12-10 11:41:21.992919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:00.355 [2024-12-10 11:41:21.992956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.895 ms 00:33:00.355 [2024-12-10 11:41:21.992967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.355 [2024-12-10 11:41:21.993026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.355 [2024-12-10 11:41:21.993055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:00.355 [2024-12-10 11:41:21.993082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:00.355 [2024-12-10 11:41:21.993091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.355 [2024-12-10 11:41:21.993453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.355 [2024-12-10 11:41:21.993470] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:00.355 [2024-12-10 11:41:21.993482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:33:00.355 [2024-12-10 11:41:21.993491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.355 [2024-12-10 11:41:21.993550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.355 [2024-12-10 11:41:21.993564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:00.355 [2024-12-10 11:41:21.993574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:33:00.355 [2024-12-10 11:41:21.993583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.355 [2024-12-10 11:41:22.008913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.355 [2024-12-10 11:41:22.008952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:00.355 [2024-12-10 11:41:22.008984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.306 ms 00:33:00.355 [2024-12-10 11:41:22.008994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.355 [2024-12-10 11:41:22.030675] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:00.355 [2024-12-10 11:41:22.030730] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:00.355 [2024-12-10 11:41:22.030764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.355 [2024-12-10 11:41:22.030774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:00.355 [2024-12-10 11:41:22.030785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.619 ms 00:33:00.355 [2024-12-10 11:41:22.030794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.355 [2024-12-10 11:41:22.045800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.355 [2024-12-10 11:41:22.045838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:00.355 [2024-12-10 11:41:22.045870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.958 ms 00:33:00.355 [2024-12-10 11:41:22.045880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.355 [2024-12-10 11:41:22.058491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.355 [2024-12-10 11:41:22.058528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:00.355 [2024-12-10 11:41:22.058558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.562 ms 00:33:00.355 [2024-12-10 11:41:22.058567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.071160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.071196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:00.356 [2024-12-10 11:41:22.071226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.549 ms 00:33:00.356 [2024-12-10 11:41:22.071235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.072029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.072093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:00.356 [2024-12-10 
11:41:22.072144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.682 ms 00:33:00.356 [2024-12-10 11:41:22.072157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.132400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.132488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:00.356 [2024-12-10 11:41:22.132523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 60.214 ms 00:33:00.356 [2024-12-10 11:41:22.132548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.142703] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:00.356 [2024-12-10 11:41:22.143510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.143544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:00.356 [2024-12-10 11:41:22.143558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.888 ms 00:33:00.356 [2024-12-10 11:41:22.143568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.143721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.143745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:00.356 [2024-12-10 11:41:22.143757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:00.356 [2024-12-10 11:41:22.143781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.143859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.143876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:00.356 [2024-12-10 11:41:22.143887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:33:00.356 [2024-12-10 11:41:22.143897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.143929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.143943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:00.356 [2024-12-10 11:41:22.143960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:00.356 [2024-12-10 11:41:22.143969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.144004] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:00.356 [2024-12-10 11:41:22.144035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.144045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:00.356 [2024-12-10 11:41:22.144066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:33:00.356 [2024-12-10 11:41:22.144090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.168224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.168267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:00.356 [2024-12-10 11:41:22.168299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.067 ms 00:33:00.356 [2024-12-10 11:41:22.168309] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.168384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.168401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:00.356 [2024-12-10 11:41:22.168411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:33:00.356 [2024-12-10 11:41:22.168434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.169966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2418.590 ms, result 0 00:33:00.356 [2024-12-10 11:41:22.184609] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.356 [2024-12-10 11:41:22.200647] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:00.356 [2024-12-10 11:41:22.208763] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:00.356 11:41:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:00.356 11:41:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:00.356 11:41:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:00.356 11:41:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:00.356 11:41:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:00.356 [2024-12-10 11:41:22.492916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.492959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:00.356 [2024-12-10 11:41:22.492998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:00.356 [2024-12-10 11:41:22.493022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.493052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.493065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:00.356 [2024-12-10 11:41:22.493075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:00.356 [2024-12-10 11:41:22.493084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.493106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.356 [2024-12-10 11:41:22.493117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:00.356 [2024-12-10 11:41:22.493127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:00.356 [2024-12-10 11:41:22.493135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.356 [2024-12-10 11:41:22.493216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.272 ms, result 0 00:33:00.356 true 00:33:00.356 11:41:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:00.614 { 00:33:00.614 "name": "ftl", 00:33:00.614 "properties": [ 00:33:00.614 { 00:33:00.614 "name": "superblock_version", 00:33:00.614 "value": 5, 00:33:00.614 "read-only": true 00:33:00.614 }, 
00:33:00.614 { 00:33:00.614 "name": "base_device", 00:33:00.614 "bands": [ 00:33:00.614 { 00:33:00.614 "id": 0, 00:33:00.614 "state": "CLOSED", 00:33:00.614 "validity": 1.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 1, 00:33:00.614 "state": "CLOSED", 00:33:00.614 "validity": 1.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 2, 00:33:00.614 "state": "CLOSED", 00:33:00.614 "validity": 0.007843137254901933 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 3, 00:33:00.614 "state": "FREE", 00:33:00.614 "validity": 0.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 4, 00:33:00.614 "state": "FREE", 00:33:00.614 "validity": 0.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 5, 00:33:00.614 "state": "FREE", 00:33:00.614 "validity": 0.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 6, 00:33:00.614 "state": "FREE", 00:33:00.614 "validity": 0.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 7, 00:33:00.614 "state": "FREE", 00:33:00.614 "validity": 0.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 8, 00:33:00.614 "state": "FREE", 00:33:00.614 "validity": 0.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 9, 00:33:00.614 "state": "FREE", 00:33:00.614 "validity": 0.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 10, 00:33:00.614 "state": "FREE", 00:33:00.614 "validity": 0.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 11, 00:33:00.614 "state": "FREE", 00:33:00.614 "validity": 0.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 12, 00:33:00.614 "state": "FREE", 00:33:00.614 "validity": 0.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 13, 00:33:00.614 "state": "FREE", 00:33:00.614 "validity": 0.0 00:33:00.614 }, 00:33:00.614 { 00:33:00.614 "id": 14, 00:33:00.614 "state": "FREE", 00:33:00.615 "validity": 0.0 00:33:00.615 }, 00:33:00.615 { 00:33:00.615 "id": 15, 00:33:00.615 "state": "FREE", 00:33:00.615 "validity": 0.0 00:33:00.615 }, 00:33:00.615 { 00:33:00.615 "id": 16, 00:33:00.615 "state": "FREE", 00:33:00.615 "validity": 0.0 00:33:00.615 }, 00:33:00.615 { 00:33:00.615 "id": 17, 00:33:00.615 "state": "FREE", 00:33:00.615 "validity": 0.0 00:33:00.615 } 00:33:00.615 ], 00:33:00.615 "read-only": true 00:33:00.615 }, 00:33:00.615 { 00:33:00.615 "name": "cache_device", 00:33:00.615 "type": "bdev", 00:33:00.615 "chunks": [ 00:33:00.615 { 00:33:00.615 "id": 0, 00:33:00.615 "state": "INACTIVE", 00:33:00.615 "utilization": 0.0 00:33:00.615 }, 00:33:00.615 { 00:33:00.615 "id": 1, 00:33:00.615 "state": "OPEN", 00:33:00.615 "utilization": 0.0 00:33:00.615 }, 00:33:00.615 { 00:33:00.615 "id": 2, 00:33:00.615 "state": "OPEN", 00:33:00.615 "utilization": 0.0 00:33:00.615 }, 00:33:00.615 { 00:33:00.615 "id": 3, 00:33:00.615 "state": "FREE", 00:33:00.615 "utilization": 0.0 00:33:00.615 }, 00:33:00.615 { 00:33:00.615 "id": 4, 00:33:00.615 "state": "FREE", 00:33:00.615 "utilization": 0.0 00:33:00.615 } 00:33:00.615 ], 00:33:00.615 "read-only": true 00:33:00.615 }, 00:33:00.615 { 00:33:00.615 "name": "verbose_mode", 00:33:00.615 "value": true, 00:33:00.615 "unit": "", 00:33:00.615 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:00.615 }, 00:33:00.615 { 00:33:00.615 "name": "prep_upgrade_on_shutdown", 00:33:00.615 "value": false, 00:33:00.615 "unit": "", 00:33:00.615 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:00.615 } 00:33:00.615 ] 00:33:00.615 } 00:33:00.615 11:41:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:33:00.615 11:41:22 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:33:00.615 11:41:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:33:00.873 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:33:00.873 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:33:00.873 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:33:00.873 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:33:00.873 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:33:01.132 Validate MD5 checksum, iteration 1
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:33:01.132 11:41:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:33:01.391 [2024-12-10 11:41:23.388263] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
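Both gates traced above come back zero: right after the restart the test requires that no NV-cache chunk reports non-zero utilization and that the band count likewise comes up empty, i.e. the shutdown left the device clean. The jq filters are quoted verbatim from upgrade_shutdown.sh; a standalone sketch of the same check (variable names are illustrative):

    # Sketch: re-run the two post-restart zero-checks by hand.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    props=$("$rpc" bdev_ftl_get_properties -b ftl)
    used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[]
                | select(.utilization != 0.0)] | length' <<< "$props")
    opened=$(jq '[.properties[] | select(.name == "bands") | .bands[]
                  | select(.state == "OPENED")] | length' <<< "$props")
    # A clean device must show zero used chunks and zero opened bands.
    [[ $used -eq 0 && $opened -eq 0 ]] || echo 'FTL device not clean after restart' >&2

The spdk_dd run that follows is the first checksum pass: tcp_dd wraps spdk_dd, pointing it at the NVMe/TCP initiator config in ini.json and reading 1024 MiB from ftln1 into a scratch file.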
00:33:01.391 [2024-12-10 11:41:23.388439] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84419 ] 00:33:01.649 [2024-12-10 11:41:23.573339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.649 [2024-12-10 11:41:23.696715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:03.551  [2024-12-10T11:41:26.285Z] Copying: 489/1024 [MB] (489 MBps) [2024-12-10T11:41:26.543Z] Copying: 957/1024 [MB] (468 MBps) [2024-12-10T11:41:27.478Z] Copying: 1024/1024 [MB] (average 477 MBps) 00:33:05.311 00:33:05.311 11:41:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:05.311 11:41:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:07.212 11:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:07.212 Validate MD5 checksum, iteration 2 00:33:07.212 11:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=be33a52bda08543997642f443c309776 00:33:07.213 11:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ be33a52bda08543997642f443c309776 != \b\e\3\3\a\5\2\b\d\a\0\8\5\4\3\9\9\7\6\4\2\f\4\4\3\c\3\0\9\7\7\6 ]] 00:33:07.213 11:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:07.213 11:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:07.213 11:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:07.213 11:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:07.213 11:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:07.213 11:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:07.213 11:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:07.213 11:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:07.213 11:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:07.471 [2024-12-10 11:41:29.396854] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
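Iteration 1 above reads the first 1 GiB window of ftln1 over NVMe/TCP (spdk_dd with bs=1048576, count=1024, qd=2), hashes the scratch file, and compares the digest inside [[ ]]; since != there performs glob matching, the trace shows the expected sum backslash-escaped character by character to force a literal match. A condensed sketch of the whole validation loop built from the commands in this trace, with the two expected digests taken from this run:

    # Sketch of test_validate_checksum: per-1GiB-window MD5 over the FTL namespace.
    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    ini=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    expected_sums=(be33a52bda08543997642f443c309776 3d0e46877f4a1a626ddaaccac6fed02d)

    skip=0
    for ((i = 0; i < ${#expected_sums[@]}; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        "$spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$ini" --ib=ftln1 --of="$file" \
            --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        sum=$(md5sum "$file" | cut -f1 '-d ')
        # Quoting the RHS disables pattern matching, the same effect the
        # char-by-char escaping achieves in the xtrace output above.
        [[ $sum != "${expected_sums[i]}" ]] && { echo 'checksum mismatch'; exit 1; }
        skip=$((skip + 1024))
    done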
00:33:07.471 [2024-12-10 11:41:29.397994] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84484 ] 00:33:07.471 [2024-12-10 11:41:29.590887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.730 [2024-12-10 11:41:29.713478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.104  [2024-12-10T11:41:32.646Z] Copying: 507/1024 [MB] (507 MBps) [2024-12-10T11:41:32.646Z] Copying: 971/1024 [MB] (464 MBps) [2024-12-10T11:41:33.213Z] Copying: 1024/1024 [MB] (average 481 MBps) 00:33:11.046 00:33:11.046 11:41:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:11.046 11:41:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3d0e46877f4a1a626ddaaccac6fed02d 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3d0e46877f4a1a626ddaaccac6fed02d != \3\d\0\e\4\6\8\7\7\f\4\a\1\a\6\2\6\d\d\a\a\c\c\a\c\6\f\e\d\0\2\d ]] 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84350 ]] 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84350 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84547 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:12.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84547 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84547 ']' 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
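With both windows verified, upgrade_shutdown.sh@114 forces a crash: tcp_target_shutdown_dirty sends kill -9 to the old target (pid 84350 in this run), skipping every FTL shutdown step, and tcp_target_setup immediately relaunches spdk_tgt from the saved tgt.json, with waitforlisten blocking until the new process (pid 84547) answers on /var/tmp/spdk.sock. Roughly, under the paths from the trace and assuming $spdk_tgt_pid holds the old target's pid:

    # Sketch of the dirty shutdown + relaunch; waitforlisten is approximated
    # here by polling rpc_get_methods, which is close to what the real helper does.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    kill -9 "$spdk_tgt_pid"      # no clean FTL shutdown: forces dirty recovery
    unset spdk_tgt_pid

    "$spdk_tgt" '--cpumask=[0]' --config="$cnfg" &
    spdk_tgt_pid=$!

    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

Everything from "SHM: clean 0, shm_clean 0" onward in the startup trace below is the recovery path this kill provokes: band state is rebuilt, the P2L checkpoints are restored and preprocessed, and the two NV-cache chunks left open at the crash (seq ids 14 and 15) are replayed and closed before the device comes back up.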
00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.948 11:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:13.207 [2024-12-10 11:41:35.171239] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:33:13.207 [2024-12-10 11:41:35.171690] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84547 ] 00:33:13.207 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84350 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:13.207 [2024-12-10 11:41:35.351260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.465 [2024-12-10 11:41:35.439705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.033 [2024-12-10 11:41:36.180621] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:14.033 [2024-12-10 11:41:36.180988] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:14.293 [2024-12-10 11:41:36.325885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.293 [2024-12-10 11:41:36.326095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:14.294 [2024-12-10 11:41:36.326123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:14.294 [2024-12-10 11:41:36.326135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.294 [2024-12-10 11:41:36.326215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.294 [2024-12-10 11:41:36.326233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:14.294 [2024-12-10 11:41:36.326244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:33:14.294 [2024-12-10 11:41:36.326254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.294 [2024-12-10 11:41:36.326294] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:14.294 [2024-12-10 11:41:36.327253] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:14.294 [2024-12-10 11:41:36.327286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.294 [2024-12-10 11:41:36.327298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:14.294 [2024-12-10 11:41:36.327309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.007 ms 00:33:14.294 [2024-12-10 11:41:36.327319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.294 [2024-12-10 11:41:36.327800] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:14.294 [2024-12-10 11:41:36.344703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.294 [2024-12-10 11:41:36.344755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:14.294 [2024-12-10 11:41:36.344772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.904 ms 00:33:14.294 [2024-12-10 11:41:36.344781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.294 [2024-12-10 11:41:36.354415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:33:14.294 [2024-12-10 11:41:36.354462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:14.294 [2024-12-10 11:41:36.354493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:33:14.294 [2024-12-10 11:41:36.354503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.294 [2024-12-10 11:41:36.355027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.294 [2024-12-10 11:41:36.355086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:14.294 [2024-12-10 11:41:36.355119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.428 ms 00:33:14.294 [2024-12-10 11:41:36.355136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.294 [2024-12-10 11:41:36.355211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.294 [2024-12-10 11:41:36.355235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:14.294 [2024-12-10 11:41:36.355247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:33:14.294 [2024-12-10 11:41:36.355257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.294 [2024-12-10 11:41:36.355291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.294 [2024-12-10 11:41:36.355306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:14.294 [2024-12-10 11:41:36.355317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:14.294 [2024-12-10 11:41:36.355327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.294 [2024-12-10 11:41:36.355357] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:14.294 [2024-12-10 11:41:36.358865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.294 [2024-12-10 11:41:36.359065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:14.294 [2024-12-10 11:41:36.359189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.515 ms 00:33:14.294 [2024-12-10 11:41:36.359238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.294 [2024-12-10 11:41:36.359309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.294 [2024-12-10 11:41:36.359428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:14.294 [2024-12-10 11:41:36.359490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:14.294 [2024-12-10 11:41:36.359525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.294 [2024-12-10 11:41:36.359598] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:14.294 [2024-12-10 11:41:36.359808] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:14.294 [2024-12-10 11:41:36.359899] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:14.294 [2024-12-10 11:41:36.360045] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:14.294 [2024-12-10 11:41:36.360216] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:14.294 [2024-12-10 11:41:36.360282] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:14.294 [2024-12-10 11:41:36.360404] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:14.294 [2024-12-10 11:41:36.360546] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:14.294 [2024-12-10 11:41:36.360667] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:14.294 [2024-12-10 11:41:36.360797] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:14.294 [2024-12-10 11:41:36.360889] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:14.294 [2024-12-10 11:41:36.360932] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:14.294 [2024-12-10 11:41:36.361049] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:14.294 [2024-12-10 11:41:36.361078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.294 [2024-12-10 11:41:36.361089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:14.294 [2024-12-10 11:41:36.361100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.482 ms 00:33:14.294 [2024-12-10 11:41:36.361110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.294 [2024-12-10 11:41:36.361205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.294 [2024-12-10 11:41:36.361219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:14.294 [2024-12-10 11:41:36.361230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:33:14.294 [2024-12-10 11:41:36.361240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.294 [2024-12-10 11:41:36.361351] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:14.294 [2024-12-10 11:41:36.361372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:14.294 [2024-12-10 11:41:36.361382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:14.294 [2024-12-10 11:41:36.361393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:14.294 [2024-12-10 11:41:36.361403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:14.294 [2024-12-10 11:41:36.361414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:14.294 [2024-12-10 11:41:36.361423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:14.294 [2024-12-10 11:41:36.361432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:14.294 [2024-12-10 11:41:36.361441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:14.294 [2024-12-10 11:41:36.361450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:14.294 [2024-12-10 11:41:36.361458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:14.294 [2024-12-10 11:41:36.361467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:14.294 [2024-12-10 11:41:36.361476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:14.294 [2024-12-10 11:41:36.361484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:14.294 [2024-12-10 11:41:36.361493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:33:14.294 [2024-12-10 11:41:36.361502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:14.294 [2024-12-10 11:41:36.361510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:14.294 [2024-12-10 11:41:36.361519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:14.294 [2024-12-10 11:41:36.361527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:14.294 [2024-12-10 11:41:36.361536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:14.294 [2024-12-10 11:41:36.361545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:14.294 [2024-12-10 11:41:36.361565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:14.294 [2024-12-10 11:41:36.361574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:14.294 [2024-12-10 11:41:36.361583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:14.294 [2024-12-10 11:41:36.361591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:14.294 [2024-12-10 11:41:36.361600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:14.294 [2024-12-10 11:41:36.361608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:14.294 [2024-12-10 11:41:36.361617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:14.294 [2024-12-10 11:41:36.361625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:14.294 [2024-12-10 11:41:36.361635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:14.294 [2024-12-10 11:41:36.361801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:14.294 [2024-12-10 11:41:36.361851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:14.294 [2024-12-10 11:41:36.361888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:14.294 [2024-12-10 11:41:36.362001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:14.294 [2024-12-10 11:41:36.362102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:14.294 [2024-12-10 11:41:36.362147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:14.294 [2024-12-10 11:41:36.362183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:14.294 [2024-12-10 11:41:36.362280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:14.294 [2024-12-10 11:41:36.362326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:14.294 [2024-12-10 11:41:36.362360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:14.294 [2024-12-10 11:41:36.362393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:14.294 [2024-12-10 11:41:36.362491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:14.294 [2024-12-10 11:41:36.362504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:14.294 [2024-12-10 11:41:36.362514] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:14.295 [2024-12-10 11:41:36.362524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:14.295 [2024-12-10 11:41:36.362533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:14.295 [2024-12-10 11:41:36.362543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:33:14.295 [2024-12-10 11:41:36.362552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:14.295 [2024-12-10 11:41:36.362562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:14.295 [2024-12-10 11:41:36.362570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:14.295 [2024-12-10 11:41:36.362579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:14.295 [2024-12-10 11:41:36.362588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:14.295 [2024-12-10 11:41:36.362597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:14.295 [2024-12-10 11:41:36.362607] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:14.295 [2024-12-10 11:41:36.362620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:14.295 [2024-12-10 11:41:36.362646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:14.295 [2024-12-10 11:41:36.362658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:14.295 [2024-12-10 11:41:36.362668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:14.295 [2024-12-10 11:41:36.362678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:14.295 [2024-12-10 11:41:36.362687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:14.295 [2024-12-10 11:41:36.362696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:14.295 [2024-12-10 11:41:36.362705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:14.295 [2024-12-10 11:41:36.362715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:14.295 [2024-12-10 11:41:36.362724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:14.295 [2024-12-10 11:41:36.362734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:14.295 [2024-12-10 11:41:36.362743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:14.295 [2024-12-10 11:41:36.362752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:14.295 [2024-12-10 11:41:36.362762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:14.295 [2024-12-10 11:41:36.362772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:14.295 [2024-12-10 11:41:36.362782] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:33:14.295 [2024-12-10 11:41:36.362793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:14.295 [2024-12-10 11:41:36.362811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:14.295 [2024-12-10 11:41:36.362821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:14.295 [2024-12-10 11:41:36.362830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:14.295 [2024-12-10 11:41:36.362840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:14.295 [2024-12-10 11:41:36.362852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.295 [2024-12-10 11:41:36.362862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:14.295 [2024-12-10 11:41:36.362872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.574 ms 00:33:14.295 [2024-12-10 11:41:36.362882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.295 [2024-12-10 11:41:36.388277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.295 [2024-12-10 11:41:36.388497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:14.295 [2024-12-10 11:41:36.388702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.326 ms 00:33:14.295 [2024-12-10 11:41:36.388753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.295 [2024-12-10 11:41:36.388904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.295 [2024-12-10 11:41:36.388957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:14.295 [2024-12-10 11:41:36.389099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:14.295 [2024-12-10 11:41:36.389159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.295 [2024-12-10 11:41:36.421454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.295 [2024-12-10 11:41:36.421707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:14.295 [2024-12-10 11:41:36.421822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.109 ms 00:33:14.295 [2024-12-10 11:41:36.421869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.295 [2024-12-10 11:41:36.422018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.295 [2024-12-10 11:41:36.422151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:14.295 [2024-12-10 11:41:36.422249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:14.295 [2024-12-10 11:41:36.422369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.295 [2024-12-10 11:41:36.422575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.295 [2024-12-10 11:41:36.422642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:14.295 [2024-12-10 11:41:36.422746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:33:14.295 [2024-12-10 11:41:36.422856] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:14.295 [2024-12-10 11:41:36.422950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.295 [2024-12-10 11:41:36.423071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:14.295 [2024-12-10 11:41:36.423119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:33:14.295 [2024-12-10 11:41:36.423153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.295 [2024-12-10 11:41:36.438400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.295 [2024-12-10 11:41:36.438581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:14.295 [2024-12-10 11:41:36.438762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.064 ms 00:33:14.295 [2024-12-10 11:41:36.438820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.295 [2024-12-10 11:41:36.438998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.295 [2024-12-10 11:41:36.439057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:14.295 [2024-12-10 11:41:36.439074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:14.295 [2024-12-10 11:41:36.439098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.555 [2024-12-10 11:41:36.464783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.555 [2024-12-10 11:41:36.464824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:14.555 [2024-12-10 11:41:36.464871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.655 ms 00:33:14.555 [2024-12-10 11:41:36.464882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.555 [2024-12-10 11:41:36.478651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.555 [2024-12-10 11:41:36.478687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:14.555 [2024-12-10 11:41:36.478712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.688 ms 00:33:14.555 [2024-12-10 11:41:36.478745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.555 [2024-12-10 11:41:36.546860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.555 [2024-12-10 11:41:36.546926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:14.555 [2024-12-10 11:41:36.546944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 68.003 ms 00:33:14.555 [2024-12-10 11:41:36.546954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.555 [2024-12-10 11:41:36.547154] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:14.555 [2024-12-10 11:41:36.547283] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:14.555 [2024-12-10 11:41:36.547409] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:14.555 [2024-12-10 11:41:36.547531] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:14.555 [2024-12-10 11:41:36.547543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.555 [2024-12-10 11:41:36.547553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:14.555 [2024-12-10 
11:41:36.547564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:33:14.555 [2024-12-10 11:41:36.547573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.555 [2024-12-10 11:41:36.547756] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:14.555 [2024-12-10 11:41:36.547778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.555 [2024-12-10 11:41:36.547810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:14.555 [2024-12-10 11:41:36.547822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:33:14.555 [2024-12-10 11:41:36.547831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.555 [2024-12-10 11:41:36.565389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.555 [2024-12-10 11:41:36.565428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:14.555 [2024-12-10 11:41:36.565443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.486 ms 00:33:14.555 [2024-12-10 11:41:36.565453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.555 [2024-12-10 11:41:36.575965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.555 [2024-12-10 11:41:36.575996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:14.555 [2024-12-10 11:41:36.576009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:14.555 [2024-12-10 11:41:36.576019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:14.555 [2024-12-10 11:41:36.576189] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:14.555 [2024-12-10 11:41:36.576356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:14.555 [2024-12-10 11:41:36.576385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:14.555 [2024-12-10 11:41:36.576396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:33:14.555 [2024-12-10 11:41:36.576405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.123 [2024-12-10 11:41:37.185738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.123 [2024-12-10 11:41:37.186091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:15.123 [2024-12-10 11:41:37.186123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 608.115 ms 00:33:15.123 [2024-12-10 11:41:37.186135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.123 [2024-12-10 11:41:37.200449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.123 [2024-12-10 11:41:37.200692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:15.123 [2024-12-10 11:41:37.200720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.886 ms 00:33:15.123 [2024-12-10 11:41:37.200732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.123 [2024-12-10 11:41:37.201174] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:33:15.123 [2024-12-10 11:41:37.201202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.123 [2024-12-10 11:41:37.201215] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:15.123 [2024-12-10 11:41:37.201228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.416 ms 00:33:15.123 [2024-12-10 11:41:37.201239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.123 [2024-12-10 11:41:37.201283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.123 [2024-12-10 11:41:37.201300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:15.123 [2024-12-10 11:41:37.201312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:15.123 [2024-12-10 11:41:37.201360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.124 [2024-12-10 11:41:37.201436] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 625.235 ms, result 0 00:33:15.124 [2024-12-10 11:41:37.201495] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:15.124 [2024-12-10 11:41:37.201589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.124 [2024-12-10 11:41:37.201603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:15.124 [2024-12-10 11:41:37.201613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:33:15.124 [2024-12-10 11:41:37.201622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.796993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.797341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:15.695 [2024-12-10 11:41:37.797406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 594.331 ms 00:33:15.695 [2024-12-10 11:41:37.797418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.801887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.802065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:15.695 [2024-12-10 11:41:37.802226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.939 ms 00:33:15.695 [2024-12-10 11:41:37.802276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.802841] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:15.695 [2024-12-10 11:41:37.803043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.803178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:15.695 [2024-12-10 11:41:37.803227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.634 ms 00:33:15.695 [2024-12-10 11:41:37.803324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.803489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.803546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:15.695 [2024-12-10 11:41:37.803736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:15.695 [2024-12-10 11:41:37.803784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 
11:41:37.803867] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 602.365 ms, result 0 00:33:15.695 [2024-12-10 11:41:37.804104] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:15.695 [2024-12-10 11:41:37.804264] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:15.695 [2024-12-10 11:41:37.804404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.804528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:15.695 [2024-12-10 11:41:37.804644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1228.280 ms 00:33:15.695 [2024-12-10 11:41:37.804695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.804889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.804950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:15.695 [2024-12-10 11:41:37.805107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:15.695 [2024-12-10 11:41:37.805154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.815907] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:15.695 [2024-12-10 11:41:37.816226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.816287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:15.695 [2024-12-10 11:41:37.816501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.019 ms 00:33:15.695 [2024-12-10 11:41:37.816563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.817411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.817557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:15.695 [2024-12-10 11:41:37.817680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.575 ms 00:33:15.695 [2024-12-10 11:41:37.817728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.820621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.820799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:15.695 [2024-12-10 11:41:37.820904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.785 ms 00:33:15.695 [2024-12-10 11:41:37.821012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.821148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.821203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:15.695 [2024-12-10 11:41:37.821298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:15.695 [2024-12-10 11:41:37.821326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.821450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.821468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:15.695 
[2024-12-10 11:41:37.821479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:15.695 [2024-12-10 11:41:37.821490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.821516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.821527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:15.695 [2024-12-10 11:41:37.821537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:15.695 [2024-12-10 11:41:37.821547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.821604] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:15.695 [2024-12-10 11:41:37.821620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.821629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:15.695 [2024-12-10 11:41:37.821639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:15.695 [2024-12-10 11:41:37.821666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.821722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:15.695 [2024-12-10 11:41:37.821735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:15.695 [2024-12-10 11:41:37.821746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:33:15.695 [2024-12-10 11:41:37.821755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:15.695 [2024-12-10 11:41:37.823132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1496.566 ms, result 0 00:33:15.695 [2024-12-10 11:41:37.837364] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.695 [2024-12-10 11:41:37.853356] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:15.954 [2024-12-10 11:41:37.862018] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:15.954 Validate MD5 checksum, iteration 1 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:15.954 11:41:37 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:15.954 11:41:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:15.955 [2024-12-10 11:41:38.003205] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:33:15.955 [2024-12-10 11:41:38.003644] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84577 ] 00:33:16.213 [2024-12-10 11:41:38.190904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.213 [2024-12-10 11:41:38.314546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.118  [2024-12-10T11:41:40.854Z] Copying: 493/1024 [MB] (493 MBps) [2024-12-10T11:41:41.112Z] Copying: 975/1024 [MB] (482 MBps) [2024-12-10T11:41:42.487Z] Copying: 1024/1024 [MB] (average 487 MBps) 00:33:20.320 00:33:20.320 11:41:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:20.320 11:41:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:22.228 11:41:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:22.228 11:41:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=be33a52bda08543997642f443c309776 00:33:22.228 11:41:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ be33a52bda08543997642f443c309776 != \b\e\3\3\a\5\2\b\d\a\0\8\5\4\3\9\9\7\6\4\2\f\4\4\3\c\3\0\9\7\7\6 ]] 00:33:22.228 11:41:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:22.228 11:41:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:22.228 Validate MD5 checksum, iteration 2 00:33:22.228 11:41:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:22.228 11:41:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:22.228 11:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:22.228 11:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:22.228 11:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:22.228 11:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:22.228 11:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:22.228 
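The pass running here repeats test_validate_checksum against the recovered device, and the digests that follow (be33a52b... for the first window, then 3d0e4687... for the second) are identical to the pre-crash values, which is the test's real assertion: kill -9 plus shared-memory and P2L recovery must preserve every acknowledged write. A sketch of that before/after framing, using read_window and kill_and_restart_target as hypothetical wrappers around the spdk_dd and kill/relaunch commands traced above:

    # Sketch only: read_window SKIP_MB and kill_and_restart_target are
    # illustrative wrappers, not helpers from the real harness.
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file

    declare -A before after
    for skip in 0 1024; do
        read_window "$skip"                       # spdk_dd ... --skip=$skip
        before[$skip]=$(md5sum "$file" | cut -f1 '-d ')
    done

    kill_and_restart_target                       # kill -9 + spdk_tgt relaunch

    for skip in 0 1024; do
        read_window "$skip"
        after[$skip]=$(md5sum "$file" | cut -f1 '-d ')
        [[ ${before[$skip]} == "${after[$skip]}" ]] ||
            { echo "window at ${skip} MiB lost data"; exit 1; }
    done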
[2024-12-10 11:41:43.990803] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:33:22.228 [2024-12-10 11:41:43.990958] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84644 ] 00:33:22.228 [2024-12-10 11:41:44.160932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.228 [2024-12-10 11:41:44.283993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.167  [2024-12-10T11:41:46.902Z] Copying: 496/1024 [MB] (496 MBps) [2024-12-10T11:41:46.902Z] Copying: 986/1024 [MB] (490 MBps) [2024-12-10T11:41:47.837Z] Copying: 1024/1024 [MB] (average 493 MBps) 00:33:25.670 00:33:25.670 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:25.670 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:27.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:27.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3d0e46877f4a1a626ddaaccac6fed02d 00:33:27.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3d0e46877f4a1a626ddaaccac6fed02d != \3\d\0\e\4\6\8\7\7\f\4\a\1\a\6\2\6\d\d\a\a\c\c\a\c\6\f\e\d\0\2\d ]] 00:33:27.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:27.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:27.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:33:27.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:33:27.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:33:27.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84547 ]] 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84547 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84547 ']' 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84547 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84547 00:33:27.832 killing process with pid 84547 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 84547'
00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84547
00:33:27.832 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84547
00:33:28.401 [2024-12-10 11:41:50.550466] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:33:28.401 [2024-12-10 11:41:50.564254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.401 [2024-12-10 11:41:50.564316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:33:28.401 [2024-12-10 11:41:50.564334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:33:28.401 [2024-12-10 11:41:50.564344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.401 [2024-12-10 11:41:50.564373] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:33:28.661 [2024-12-10 11:41:50.567538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.567619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:33:28.661 [2024-12-10 11:41:50.567660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.144 ms
00:33:28.661 [2024-12-10 11:41:50.567672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.567951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.567980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:33:28.661 [2024-12-10 11:41:50.567994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms
00:33:28.661 [2024-12-10 11:41:50.568005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.569398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.569466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:33:28.661 [2024-12-10 11:41:50.569511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.356 ms
00:33:28.661 [2024-12-10 11:41:50.569544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.570856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.570887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims
00:33:28.661 [2024-12-10 11:41:50.570899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.272 ms
00:33:28.661 [2024-12-10 11:41:50.570909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.581801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.581850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:33:28.661 [2024-12-10 11:41:50.581888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.856 ms
00:33:28.661 [2024-12-10 11:41:50.581898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.587728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.587779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:33:28.661 [2024-12-10 11:41:50.587809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.792 ms
00:33:28.661 [2024-12-10 11:41:50.587819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.587891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.587908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:33:28.661 [2024-12-10 11:41:50.587919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms
00:33:28.661 [2024-12-10 11:41:50.587935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.598295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.598344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata
00:33:28.661 [2024-12-10 11:41:50.598373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.341 ms
00:33:28.661 [2024-12-10 11:41:50.598382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.609196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.609243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata
00:33:28.661 [2024-12-10 11:41:50.609272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.779 ms
00:33:28.661 [2024-12-10 11:41:50.609281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.619905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.619956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:33:28.661 [2024-12-10 11:41:50.619970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.589 ms
00:33:28.661 [2024-12-10 11:41:50.619979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.630064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.630111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:33:28.661 [2024-12-10 11:41:50.630140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.021 ms
00:33:28.661 [2024-12-10 11:41:50.630148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.630185] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:33:28.661 [2024-12-10 11:41:50.630205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:33:28.661 [2024-12-10 11:41:50.630217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:33:28.661 [2024-12-10 11:41:50.630227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:33:28.661 [2024-12-10 11:41:50.630237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:33:28.661 [2024-12-10 11:41:50.630410] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:33:28.661 [2024-12-10 11:41:50.630420] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 331a79b2-e7ae-4e9f-9886-4cb6065a80ab
00:33:28.661 [2024-12-10 11:41:50.630430] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:33:28.661 [2024-12-10 11:41:50.630439] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:33:28.661 [2024-12-10 11:41:50.630448] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:33:28.661 [2024-12-10 11:41:50.630458] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:33:28.661 [2024-12-10 11:41:50.630467] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:33:28.661 [2024-12-10 11:41:50.630476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:33:28.661 [2024-12-10 11:41:50.630494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:33:28.661 [2024-12-10 11:41:50.630502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:33:28.661 [2024-12-10 11:41:50.630510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:33:28.661 [2024-12-10 11:41:50.630520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.630530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:33:28.661 [2024-12-10 11:41:50.630542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms
00:33:28.661 [2024-12-10 11:41:50.630552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.646075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.646124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:33:28.661 [2024-12-10 11:41:50.646138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.503 ms
00:33:28.661 [2024-12-10 11:41:50.646148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.646540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:28.661 [2024-12-10 11:41:50.646564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:33:28.661 [2024-12-10 11:41:50.646576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.359 ms
00:33:28.661 [2024-12-10 11:41:50.646586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.692709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:28.661 [2024-12-10 11:41:50.692752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:33:28.661 [2024-12-10 11:41:50.692781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:28.661 [2024-12-10 11:41:50.692797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.692836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:28.661 [2024-12-10 11:41:50.692849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:33:28.661 [2024-12-10 11:41:50.692860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:28.661 [2024-12-10 11:41:50.692869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.661 [2024-12-10 11:41:50.693014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:28.662 [2024-12-10 11:41:50.693057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:33:28.662 [2024-12-10 11:41:50.693068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:28.662 [2024-12-10 11:41:50.693078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.662 [2024-12-10 11:41:50.693108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:28.662 [2024-12-10 11:41:50.693121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:33:28.662 [2024-12-10 11:41:50.693143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:28.662 [2024-12-10 11:41:50.693153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.662 [2024-12-10 11:41:50.772659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:28.662 [2024-12-10 11:41:50.772710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:33:28.662 [2024-12-10 11:41:50.772740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:28.662 [2024-12-10 11:41:50.772750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.921 [2024-12-10 11:41:50.839218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:28.921 [2024-12-10 11:41:50.839267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:33:28.921 [2024-12-10 11:41:50.839298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:28.921 [2024-12-10 11:41:50.839308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.921 [2024-12-10 11:41:50.839405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:28.921 [2024-12-10 11:41:50.839423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:33:28.921 [2024-12-10 11:41:50.839433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:28.921 [2024-12-10 11:41:50.839457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.921 [2024-12-10 11:41:50.839536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:28.921 [2024-12-10 11:41:50.839598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:33:28.921 [2024-12-10 11:41:50.839609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:28.921 [2024-12-10 11:41:50.839619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.921 [2024-12-10 11:41:50.839757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:28.921 [2024-12-10 11:41:50.839776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:33:28.921 [2024-12-10 11:41:50.839787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:28.921 [2024-12-10 11:41:50.839797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.921 [2024-12-10 11:41:50.839849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:28.921 [2024-12-10 11:41:50.839867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:33:28.921 [2024-12-10 11:41:50.839885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:28.921 [2024-12-10 11:41:50.839895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.921 [2024-12-10 11:41:50.839937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:28.921 [2024-12-10 11:41:50.839951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:33:28.921 [2024-12-10 11:41:50.839962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:28.921 [2024-12-10 11:41:50.839971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.921 [2024-12-10 11:41:50.840019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:28.921 [2024-12-10 11:41:50.840040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:33:28.921 [2024-12-10 11:41:50.840051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:28.921 [2024-12-10 11:41:50.840061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:28.921 [2024-12-10 11:41:50.840242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 275.965 ms, result 0
00:33:29.859 11:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:33:29.859 11:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:33:29.859 11:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:33:29.859 11:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:33:29.859 11:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:33:29.859 11:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:33:29.859 Remove shared memory files
11:41:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
11:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
11:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:33:29.859 11:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:33:29.859 11:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84350
00:33:29.859 11:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:29.859 11:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:33:29.859 ************************************
00:33:29.859 END TEST ftl_upgrade_shutdown
00:33:29.859 ************************************
00:33:29.859
00:33:29.859 real 1m22.227s
00:33:29.859 user 1m58.230s
00:33:29.859 sys 0m21.008s
00:33:29.859 11:41:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:29.859 11:41:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:33:29.859 Process with pid 76963 is not found
11:41:51 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:33:29.859 11:41:51 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:33:29.859 11:41:51 ftl -- ftl/ftl.sh@14 -- # killprocess 76963
00:33:29.859 11:41:51 ftl -- common/autotest_common.sh@954 -- # '[' -z 76963 ']'
00:33:29.859 11:41:51 ftl -- common/autotest_common.sh@958 -- # kill -0 76963
00:33:29.859 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76963) - No such process
00:33:29.859 11:41:51 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76963 is not found'
00:33:29.859 11:41:51 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:33:29.859 11:41:51 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:33:29.859 11:41:51 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84758
00:33:29.859 11:41:51 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84758
00:33:29.859 11:41:51 ftl -- common/autotest_common.sh@835 -- # '[' -z 84758 ']'
00:33:29.859 11:41:51 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:29.859 11:41:51 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:29.859 11:41:51 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:29.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:29.859 11:41:51 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:29.859 11:41:51 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:29.859 [2024-12-10 11:41:51.942439] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:33:29.859 [2024-12-10 11:41:51.942618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84758 ]
00:33:30.119 [2024-12-10 11:41:52.119100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:30.119 [2024-12-10 11:41:52.199007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:30.689 11:41:52 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:30.689 11:41:52 ftl -- common/autotest_common.sh@868 -- # return 0
00:33:30.689 11:41:52 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:33:31.257 nvme0n1
00:33:31.257 11:41:53 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:33:31.257 11:41:53 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:33:31.257 11:41:53 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:33:31.257 11:41:53 ftl -- ftl/common.sh@28 -- # stores=9deaed7a-3edf-4d08-924c-5878d0de9987
00:33:31.257 11:41:53 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:33:31.257 11:41:53 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9deaed7a-3edf-4d08-924c-5878d0de9987
00:33:31.516 11:41:53 ftl -- ftl/ftl.sh@23 -- # killprocess 84758
00:33:31.516 11:41:53 ftl -- common/autotest_common.sh@954 -- # '[' -z 84758 ']'
00:33:31.516 11:41:53 ftl -- common/autotest_common.sh@958 -- # kill -0 84758
00:33:31.516 11:41:53 ftl -- common/autotest_common.sh@959 -- # uname
00:33:31.516 11:41:53 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:31.516 11:41:53 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84758
00:33:31.516 killing process with pid 84758
11:41:53 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:31.516 11:41:53 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:31.516 11:41:53 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84758'
00:33:31.516 11:41:53 ftl -- common/autotest_common.sh@973 -- # kill 84758
00:33:31.516 11:41:53 ftl -- common/autotest_common.sh@978 -- # wait 84758
00:33:33.420 11:41:55 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:33:33.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:33.420 Waiting for block devices as requested
00:33:33.679 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:33:33.679 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:33:33.679 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:33:33.938 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:33:39.212 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:33:39.212 Remove shared memory files
11:42:00 ftl -- ftl/ftl.sh@28 -- # remove_shm
11:42:00 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
11:42:00 ftl -- ftl/common.sh@205 -- # rm -f rm -f
11:42:00 ftl -- ftl/common.sh@206 -- # rm -f rm -f
11:42:00 ftl -- ftl/common.sh@207 -- # rm -f rm -f
11:42:00 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
11:42:00 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:33:39.212
00:33:39.212 real 11m47.281s
00:33:39.212 user 14m45.421s
00:33:39.212 sys 1m26.845s
00:33:39.212 11:42:00 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:39.212 11:42:00 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:39.212 ************************************
00:33:39.212 END TEST ftl
00:33:39.212 ************************************
00:33:39.212 11:42:01 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:33:39.212 11:42:01 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:33:39.212 11:42:01 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:33:39.212 11:42:01 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:33:39.212 11:42:01 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:33:39.212 11:42:01 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:33:39.212 11:42:01 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:33:39.212 11:42:01 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:33:39.212 11:42:01 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:33:39.212 11:42:01 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:33:39.212 11:42:01 -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:39.212 11:42:01 -- common/autotest_common.sh@10 -- # set +x
00:33:39.212 11:42:01 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:33:39.212 11:42:01 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:33:39.212 11:42:01 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:33:39.212 11:42:01 -- common/autotest_common.sh@10 -- # set +x
00:33:40.588 INFO: APP EXITING
00:33:40.588 INFO: killing all VMs
00:33:40.588 INFO: killing vhost app
00:33:40.588 INFO: EXIT DONE
00:33:41.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:41.415 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:33:41.415 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:33:41.415 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:33:41.415 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:33:41.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:42.242 Cleaning
00:33:42.242 Removing: /var/run/dpdk/spdk0/config
00:33:42.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:33:42.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:33:42.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:33:42.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:33:42.243 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:33:42.243 Removing: /var/run/dpdk/spdk0/hugepage_info
00:33:42.243 Removing: /var/run/dpdk/spdk0
00:33:42.243 Removing: /var/run/dpdk/spdk_pid58069
00:33:42.243 Removing: /var/run/dpdk/spdk_pid58299
00:33:42.243 Removing: /var/run/dpdk/spdk_pid58528
00:33:42.243 Removing: /var/run/dpdk/spdk_pid58632
00:33:42.243 Removing: /var/run/dpdk/spdk_pid58677
00:33:42.243 Removing: /var/run/dpdk/spdk_pid58811
00:33:42.243 Removing: /var/run/dpdk/spdk_pid58834
00:33:42.243 Removing: /var/run/dpdk/spdk_pid59034
00:33:42.243 Removing: /var/run/dpdk/spdk_pid59150
00:33:42.243 Removing: /var/run/dpdk/spdk_pid59252
00:33:42.243 Removing: /var/run/dpdk/spdk_pid59377
00:33:42.243 Removing: /var/run/dpdk/spdk_pid59485
00:33:42.243 Removing: /var/run/dpdk/spdk_pid59524
00:33:42.243 Removing: /var/run/dpdk/spdk_pid59561
00:33:42.243 Removing: /var/run/dpdk/spdk_pid59639
00:33:42.243 Removing: /var/run/dpdk/spdk_pid59736
00:33:42.243 Removing: /var/run/dpdk/spdk_pid60205
00:33:42.243 Removing: /var/run/dpdk/spdk_pid60280
00:33:42.243 Removing: /var/run/dpdk/spdk_pid60349
00:33:42.243 Removing: /var/run/dpdk/spdk_pid60370
00:33:42.243 Removing: /var/run/dpdk/spdk_pid60500
00:33:42.243 Removing: /var/run/dpdk/spdk_pid60527
00:33:42.243 Removing: /var/run/dpdk/spdk_pid60662
00:33:42.243 Removing: /var/run/dpdk/spdk_pid60678
00:33:42.243 Removing: /var/run/dpdk/spdk_pid60742
00:33:42.243 Removing: /var/run/dpdk/spdk_pid60766
00:33:42.243 Removing: /var/run/dpdk/spdk_pid60830
00:33:42.243 Removing: /var/run/dpdk/spdk_pid60852
00:33:42.243 Removing: /var/run/dpdk/spdk_pid61043
00:33:42.243 Removing: /var/run/dpdk/spdk_pid61085
00:33:42.243 Removing: /var/run/dpdk/spdk_pid61173
00:33:42.243 Removing: /var/run/dpdk/spdk_pid61357
00:33:42.243 Removing: /var/run/dpdk/spdk_pid61452
00:33:42.243 Removing: /var/run/dpdk/spdk_pid61494
00:33:42.243 Removing: /var/run/dpdk/spdk_pid61970
00:33:42.243 Removing: /var/run/dpdk/spdk_pid62074
00:33:42.243 Removing: /var/run/dpdk/spdk_pid62184
00:33:42.243 Removing: /var/run/dpdk/spdk_pid62243
00:33:42.243 Removing: /var/run/dpdk/spdk_pid62267
00:33:42.502 Removing: /var/run/dpdk/spdk_pid62351
00:33:42.502 Removing: /var/run/dpdk/spdk_pid62984
00:33:42.502 Removing: /var/run/dpdk/spdk_pid63026
00:33:42.502 Removing: /var/run/dpdk/spdk_pid63550
00:33:42.502 Removing: /var/run/dpdk/spdk_pid63648
00:33:42.502 Removing: /var/run/dpdk/spdk_pid63767
00:33:42.502 Removing: /var/run/dpdk/spdk_pid63826
00:33:42.502 Removing: /var/run/dpdk/spdk_pid63847
00:33:42.502 Removing: /var/run/dpdk/spdk_pid63878
00:33:42.502 Removing: /var/run/dpdk/spdk_pid65765
00:33:42.502 Removing: /var/run/dpdk/spdk_pid65908
00:33:42.502 Removing: /var/run/dpdk/spdk_pid65918
00:33:42.502 Removing: /var/run/dpdk/spdk_pid65935
00:33:42.502 Removing: /var/run/dpdk/spdk_pid65976
00:33:42.502 Removing: /var/run/dpdk/spdk_pid65980
00:33:42.502 Removing: /var/run/dpdk/spdk_pid65992
00:33:42.502 Removing: /var/run/dpdk/spdk_pid66037
00:33:42.502 Removing: /var/run/dpdk/spdk_pid66046
00:33:42.502 Removing: /var/run/dpdk/spdk_pid66058
00:33:42.502 Removing: /var/run/dpdk/spdk_pid66103
00:33:42.502 Removing: /var/run/dpdk/spdk_pid66107
00:33:42.502 Removing: /var/run/dpdk/spdk_pid66119
00:33:42.502 Removing: /var/run/dpdk/spdk_pid67520
00:33:42.502 Removing: /var/run/dpdk/spdk_pid67636
00:33:42.502 Removing: /var/run/dpdk/spdk_pid69054
00:33:42.502 Removing: /var/run/dpdk/spdk_pid70779
00:33:42.502 Removing: /var/run/dpdk/spdk_pid70859
00:33:42.502 Removing: /var/run/dpdk/spdk_pid70940
00:33:42.502 Removing: /var/run/dpdk/spdk_pid71044
00:33:42.502 Removing: /var/run/dpdk/spdk_pid71151
00:33:42.502 Removing: /var/run/dpdk/spdk_pid71247
00:33:42.502 Removing: /var/run/dpdk/spdk_pid71325
00:33:42.502 Removing: /var/run/dpdk/spdk_pid71406
00:33:42.502 Removing: /var/run/dpdk/spdk_pid71516
00:33:42.502 Removing: /var/run/dpdk/spdk_pid71612
00:33:42.502 Removing: /var/run/dpdk/spdk_pid71709
00:33:42.502 Removing: /var/run/dpdk/spdk_pid71789
00:33:42.502 Removing: /var/run/dpdk/spdk_pid71864
00:33:42.502 Removing: /var/run/dpdk/spdk_pid71974
00:33:42.502 Removing: /var/run/dpdk/spdk_pid72066
00:33:42.502 Removing: /var/run/dpdk/spdk_pid72166
00:33:42.502 Removing: /var/run/dpdk/spdk_pid72247
00:33:42.502 Removing: /var/run/dpdk/spdk_pid72317
00:33:42.502 Removing: /var/run/dpdk/spdk_pid72427
00:33:42.502 Removing: /var/run/dpdk/spdk_pid72528
00:33:42.502 Removing: /var/run/dpdk/spdk_pid72627
00:33:42.502 Removing: /var/run/dpdk/spdk_pid72708
00:33:42.502 Removing: /var/run/dpdk/spdk_pid72784
00:33:42.502 Removing: /var/run/dpdk/spdk_pid72863
00:33:42.502 Removing: /var/run/dpdk/spdk_pid72942
00:33:42.502 Removing: /var/run/dpdk/spdk_pid73045
00:33:42.502 Removing: /var/run/dpdk/spdk_pid73136
00:33:42.502 Removing: /var/run/dpdk/spdk_pid73231
00:33:42.502 Removing: /var/run/dpdk/spdk_pid73315
00:33:42.502 Removing: /var/run/dpdk/spdk_pid73391
00:33:42.502 Removing: /var/run/dpdk/spdk_pid73465
00:33:42.502 Removing: /var/run/dpdk/spdk_pid73546
00:33:42.502 Removing: /var/run/dpdk/spdk_pid73650
00:33:42.502 Removing: /var/run/dpdk/spdk_pid73745
00:33:42.502 Removing: /var/run/dpdk/spdk_pid73892
00:33:42.502 Removing: /var/run/dpdk/spdk_pid74182
00:33:42.502 Removing: /var/run/dpdk/spdk_pid74213
00:33:42.502 Removing: /var/run/dpdk/spdk_pid74694
00:33:42.502 Removing: /var/run/dpdk/spdk_pid74882
00:33:42.502 Removing: /var/run/dpdk/spdk_pid74976
00:33:42.502 Removing: /var/run/dpdk/spdk_pid75092
00:33:42.502 Removing: /var/run/dpdk/spdk_pid75146
00:33:42.502 Removing: /var/run/dpdk/spdk_pid75171
00:33:42.502 Removing: /var/run/dpdk/spdk_pid75462
00:33:42.502 Removing: /var/run/dpdk/spdk_pid75528
00:33:42.502 Removing: /var/run/dpdk/spdk_pid75614
00:33:42.502 Removing: /var/run/dpdk/spdk_pid76031
00:33:42.502 Removing: /var/run/dpdk/spdk_pid76172
00:33:42.502 Removing: /var/run/dpdk/spdk_pid76963
00:33:42.502 Removing: /var/run/dpdk/spdk_pid77107
00:33:42.502 Removing: /var/run/dpdk/spdk_pid77299
00:33:42.502 Removing: /var/run/dpdk/spdk_pid77402
00:33:42.502 Removing: /var/run/dpdk/spdk_pid77758
00:33:42.502 Removing: /var/run/dpdk/spdk_pid78042
00:33:42.502 Removing: /var/run/dpdk/spdk_pid78394
00:33:42.502 Removing: /var/run/dpdk/spdk_pid78589
00:33:42.502 Removing: /var/run/dpdk/spdk_pid78726
00:33:42.502 Removing: /var/run/dpdk/spdk_pid78793
00:33:42.762 Removing: /var/run/dpdk/spdk_pid78931
00:33:42.762 Removing: /var/run/dpdk/spdk_pid78963
00:33:42.762 Removing: /var/run/dpdk/spdk_pid79028
00:33:42.762 Removing: /var/run/dpdk/spdk_pid79237
00:33:42.762 Removing: /var/run/dpdk/spdk_pid79475
00:33:42.762 Removing: /var/run/dpdk/spdk_pid79872
00:33:42.762 Removing: /var/run/dpdk/spdk_pid80332
00:33:42.762 Removing: /var/run/dpdk/spdk_pid80774
00:33:42.762 Removing: /var/run/dpdk/spdk_pid81317
00:33:42.762 Removing: /var/run/dpdk/spdk_pid81460
00:33:42.762 Removing: /var/run/dpdk/spdk_pid81559
00:33:42.762 Removing: /var/run/dpdk/spdk_pid82263
00:33:42.762 Removing: /var/run/dpdk/spdk_pid82346
00:33:42.762 Removing: /var/run/dpdk/spdk_pid82823
00:33:42.762 Removing: /var/run/dpdk/spdk_pid83243
00:33:42.762 Removing: /var/run/dpdk/spdk_pid83795
00:33:42.762 Removing: /var/run/dpdk/spdk_pid83912
00:33:42.762 Removing: /var/run/dpdk/spdk_pid83954
00:33:42.762 Removing: /var/run/dpdk/spdk_pid84024
00:33:42.762 Removing: /var/run/dpdk/spdk_pid84082
00:33:42.762 Removing: /var/run/dpdk/spdk_pid84146
00:33:42.762 Removing: /var/run/dpdk/spdk_pid84350
00:33:42.762 Removing: /var/run/dpdk/spdk_pid84419
00:33:42.762 Removing: /var/run/dpdk/spdk_pid84484
00:33:42.762 Removing: /var/run/dpdk/spdk_pid84547
00:33:42.762 Removing: /var/run/dpdk/spdk_pid84577
00:33:42.762 Removing: /var/run/dpdk/spdk_pid84644
00:33:42.762 Removing: /var/run/dpdk/spdk_pid84758
00:33:42.762 Clean
00:33:42.762 11:42:04 -- common/autotest_common.sh@1453 -- # return 0
00:33:42.762 11:42:04 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:33:42.762 11:42:04 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:42.762 11:42:04 -- common/autotest_common.sh@10 -- # set +x
00:33:42.762 11:42:04 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:33:42.762 11:42:04 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:42.762 11:42:04 -- common/autotest_common.sh@10 -- # set +x
00:33:42.762 11:42:04 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:42.762 11:42:04 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:33:42.762 11:42:04 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:33:42.762 11:42:04 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:33:42.762 11:42:04 -- spdk/autotest.sh@398 -- # hostname
00:33:42.762 11:42:04 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:33:43.024 geninfo: WARNING: invalid characters removed from testname!
00:34:04.954 11:42:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:08.275 11:42:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:11.563 11:42:32 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:13.465 11:42:35 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:15.998 11:42:37 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:18.529 11:42:40 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:21.061 11:42:42 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:21.061 11:42:42 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:21.061 11:42:42 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:21.061 11:42:42 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:21.061 11:42:42 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:21.061 11:42:42 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:21.061 + [[ -n 5308 ]]
00:34:21.061 + sudo kill 5308
00:34:21.070 [Pipeline] }
00:34:21.085 [Pipeline] // timeout
00:34:21.090 [Pipeline] }
00:34:21.104 [Pipeline] // stage
00:34:21.110 [Pipeline] }
00:34:21.120 [Pipeline] // catchError
00:34:21.128 [Pipeline] stage
00:34:21.130 [Pipeline] { (Stop VM)
00:34:21.142 [Pipeline] sh
00:34:21.422 + vagrant halt
00:34:23.951 ==> default: Halting domain...
00:34:30.526 [Pipeline] sh
00:34:30.804 + vagrant destroy -f
00:34:33.332 ==> default: Removing domain...
00:34:33.910 [Pipeline] sh
00:34:34.188 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:34.200 [Pipeline] }
00:34:34.214 [Pipeline] // stage
00:34:34.219 [Pipeline] }
00:34:34.232 [Pipeline] // dir
00:34:34.237 [Pipeline] }
00:34:34.250 [Pipeline] // wrap
00:34:34.256 [Pipeline] }
00:34:34.268 [Pipeline] // catchError
00:34:34.276 [Pipeline] stage
00:34:34.277 [Pipeline] { (Epilogue)
00:34:34.289 [Pipeline] sh
00:34:34.568 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:39.912 [Pipeline] catchError
00:34:39.914 [Pipeline] {
00:34:39.926 [Pipeline] sh
00:34:40.206 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:40.465 Artifacts sizes are good
00:34:40.473 [Pipeline] }
00:34:40.485 [Pipeline] // catchError
00:34:40.494 [Pipeline] archiveArtifacts
00:34:40.501 Archiving artifacts
00:34:40.605 [Pipeline] cleanWs
00:34:40.617 [WS-CLEANUP] Deleting project workspace...
00:34:40.617 [WS-CLEANUP] Deferred wipeout is used...
00:34:40.623 [WS-CLEANUP] done
00:34:40.625 [Pipeline] }
00:34:40.640 [Pipeline] // stage
00:34:40.645 [Pipeline] }
00:34:40.660 [Pipeline] // node
00:34:40.665 [Pipeline] End of Pipeline
00:34:40.698 Finished: SUCCESS
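
For reference, the per-step timings in the FTL shutdown trace above (the mngt/ftl_mngt.c trace_step records) can be tallied offline with a short awk sketch. Everything beyond the log itself is an assumption here: build.log stands in for a saved copy of this console output, and the script relies only on the "428:trace_step: ... name:" / "430:trace_step: ... duration:" record format shown in this section.

    # Sketch: pair each step name with the duration reported on the following
    # trace_step record, print a per-step table, then the summed total.
    # 'build.log' is a hypothetical filename for a saved copy of this console log.
    awk '/428:trace_step:/ && /name: / { sub(/.*name: /, ""); step = $0 }
         /430:trace_step:/ && /duration: / {
             match($0, /duration: [0-9.]+/)
             d = substr($0, RSTART + 10, RLENGTH - 10)   # strip "duration: " prefix
             printf "%-32s %10s ms\n", step, d
             total += d
         }
         END { printf "%-32s %10.3f ms\n", "TOTAL", total }' build.log

Against this section the sketch would report, for example, "Persist NV cache metadata 10.856 ms". Note the total covers only the traced steps; the finish_msg record reports 275.965 ms for the whole 'FTL shutdown' management process, which also includes time spent between steps.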